Problem

I got interested in semantic technologies after reading a lot of books, blogs and articles on the net saying that they would make data machine-understandable, let intelligent agents do powerful reasoning, enable automated and dynamic service composition, and so on.

I have been reading the same stuff for two years now. The number of articles, blogs and semantic conferences has increased considerably, but I am still unable to see any killer application. Why is that? Or is there some application or product (commercial or open source) already out there that actually does all that is being boasted about?

To put it more precisely: is there any product that leverages semantic technologies (especially RDF/OWL/SPARQL) and delivers functionality, performance or maintainability that would not have been possible with existing (non-semantic) technologies? Some product that depends entirely on semantic technologies, really adds value for its customers, and generates revenue?

There is no correct solution

Other tips

I think Drupal 7 stands a chance of being a killer semantic app. It is the next version of a CMS with a sizeable install base, and when the release is final and everyone starts upgrading, suddenly tons of sites will automatically expose RDF in the form of embedded RDFa without any intervention on the part of their users.

From a business viewpoint I think linked data offerings like GoodRelations are a potential killer app. For example, see the Scott Brinker and Priyank Mohan blog posts, which discuss how BestBuy saw a 30% increase in traffic after it started embedding GoodRelations-based linked data as RDFa in its web pages.

In general, linked data is an excellent way to augment your site with machine-readable data, and it shows remarkable results for your site's visibility. I've recently seen a couple of presentations by people from the BBC whose Wildlife Finder application (a linked data application built on BBC natural history resources) already outperforms some Wikipedia pages for animals in Google search results.

In biology, interest in RDF and related technologies is very high. People want both less parsing and custom code to integrate data, and more advanced queries. RDF is already providing the former; for example, UniProtKB, one of the biggest biomedical resources, offers its data in RDF. For advanced querying we're not quite there yet, as performance and the availability of good RDF data and OWL ontologies are still a bit lacking. But it's starting; check out BioGateway for an example of what can be done.
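To make the querying side concrete, here is a minimal sketch using the dotNetRDF library (VDS.RDF) to run a SPARQL query against a remote endpoint. The endpoint URL is a placeholder and the query only approximates the UniProt core vocabulary, so treat both as illustrative rather than as the actual UniProtKB or BioGateway services.

using System;
using VDS.RDF.Query;

class UniProtQuerySketch
{
    static void Main()
    {
        // Placeholder endpoint URL - substitute the real UniProtKB/BioGateway SPARQL endpoint.
        SparqlRemoteEndpoint endpoint =
            new SparqlRemoteEndpoint(new Uri("http://example.org/sparql"));

        // Illustrative query: list a few proteins and their recommended names.
        string query = @"
            PREFIX up: <http://purl.uniprot.org/core/>
            SELECT ?protein ?name WHERE {
                ?protein a up:Protein ;
                         up:recommendedName ?rn .
                ?rn up:fullName ?name .
            } LIMIT 10";

        SparqlResultSet results = endpoint.QueryWithResultSet(query);
        foreach (SparqlResult result in results)
        {
            Console.WriteLine(result["protein"] + " - " + result["name"]);
        }
    }
}

The point is not this particular vocabulary but that a few declarative lines can replace the custom parsing scripts mentioned below.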

In that light, the fruits of these technologies are not individual killer apps, but all the accumulated time savings for researchers who can explore the data directly instead of writing ad hoc scripts, setting up temporary SQL databases, and doing all that plumbing.

You can check the AuditData table in the content database to see whether the View entries are actually there. That will tell you whether the code is the problem.

If the code is the problem, check the following:

Whether the users were granted permissions only through an AD group. SharePoint users who get access only via an AD group are not added to the site's SharePoint user information list; if you check SiteCollURL/_Catalogs/Users/Simple.aspx you will not find those users, only the AD group that was added as a SharePoint principal. Add all the user members of that AD group to the SharePoint site directly (for example, to a dummy group) and run the code again.

Not related to the problem, but a few things that would improve the code:

item.Audit.AuditFlags is used to configure which events SharePoint registers by setting flags; that line contributes nothing to your code. Auditing is normally switched on once, as a configuration step, along the lines of the sketch below.
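A minimal sketch of that one-time configuration step, assuming the standard Microsoft.SharePoint object model; the helper name is made up for illustration:

// Illustrative helper: enable auditing of View events on a single item.
// This is configuration, done once, and is separate from querying the audit log.
private void EnableViewAuditing(SPListItem item)
{
    item.Audit.AuditFlags = SPAuditMaskType.View;
    item.Audit.Update();
}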

To query only View events, use SPAuditQuery.AddEventRestriction(SPAuditEventType.View); then you no longer need the check for whether each entry is a View event. The code would then look like this:

private List<Int32> GetWhoHasRead()
{
    List<Int32> returnList = new List<Int32>();
    //run with elevated privileges
    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        using (SPSite rootSite = new SPSite(SPContext.Current.Site.Url))
        {
            using (SPWeb rootWeb = rootSite.OpenWeb())
            {
                //get the library
                SPList testList = rootWeb.Lists["LibraryOne"];
                //get the document
                SPListItem item = testList.Items[0];

                SPAuditQuery query = new SPAuditQuery(rootSite);
                query.RestrictToListItem(item);
                //we want to know who viewed the document
                query.AddEventRestriction(SPAuditEventType.View);
                SPAuditEntryCollection entries = rootSite.Audit.GetEntries(query);

                //loopy loop
                foreach (SPAuditEntry entry in entries)
                {
                    //show who has read the document
                    SPUser readUser = rootWeb.AllUsers.GetByID(entry.UserId);
                    txtRead.Text += readUser.LoginName + "\r\n";
                    returnList.Add(readUser.ID);
                }

            }
        }
    });
    return returnList;
}

The semantic web relies on content providers making the effort to annotate everything properly so that it is machine-readable.

That is too much work for most people who are not librarians.

The real killer app seems to be something that can derive semantics from unstructured content without special markup. Look at the amazing job Google, for example, has done with its search engine.

In order for a system to work, it must not place a heavy burden on its users.

A middle ground is tagging systems like the one here on Stack Overflow. They work pretty well, even though the tags are completely ad hoc and in parts inconsistent.

Deep semantic support is interesting in a number of places where powerful queries are needed. One example was a project I was working on recently where the service used for looking up where to dispatch workload was semantically based. RDF/SPARQL itself is interesting because it gives you fairly rich querying right out of the box, but when you add an OWL ontology it gets even better: you can answer richer queries (i.e., ones that ask questions closer to what the user, and their employer, really wants) while allowing the service providers to express what they are offering more clearly too. That doesn't mean everyone telling everyone everything, not at all. Instead, we had parties describing what services were provided, not what configuration they were using to provision those services. And it was all empowered by the pervasive use of semantic technologies throughout the information systems.
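To make the "richer queries" point concrete, here is a hedged sketch of the kind of lookup a class hierarchy enables: asking for any provider whose offering falls anywhere under a general service class, instead of enumerating every concrete service type by hand. The endpoint URL and the ex: vocabulary are invented for illustration, and the rdfs:subClassOf* property path is just one simple way to approximate subclass reasoning at query time; again it uses dotNetRDF.

using System;
using VDS.RDF.Query;

class ServiceLookupSketch
{
    static void Main()
    {
        // Hypothetical service-registry endpoint.
        SparqlRemoteEndpoint registry =
            new SparqlRemoteEndpoint(new Uri("http://example.org/registry/sparql"));

        // Find providers offering anything classified, directly or via subclasses,
        // under ex:ComputeService. Without the ontology's class hierarchy you would
        // have to list every concrete service type yourself.
        string query = @"
            PREFIX ex:   <http://example.org/services#>
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            SELECT ?provider ?offering WHERE {
                ?provider ex:offers ?offering .
                ?offering a ?type .
                ?type rdfs:subClassOf* ex:ComputeService .
            }";

        foreach (SparqlResult result in registry.QueryWithResultSet(query))
        {
            Console.WriteLine(result["provider"] + " offers " + result["offering"]);
        }
    }
}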

Currently I'm working on Taverna which is now using RDF to provide a souped-up logging system; in particular, users (especially scientists) can search through the wealth of information recorded much more easily than if they had to just grep through a huge text file. After all, it would be a little absurd if you had to use text mining to find out what really happened in your text mining workflow...

I was impressed with AceWiki. It's one of several wiki projects out there that are attempting to semantically organize the information people enter.

It's still a work in progress, and has some limitations. For example, it only allows people to enter sentences that conform to a subset of English vocabulary and grammar.

However, I hope these take off. There's such a huge amount of free data out there (e.g. Wikipedia) and we can't create programs to reason over it because the content is too noisy.

Protege is good for developing ontologies.

License: CC-BY-SA with attribution
Not affiliated with StackOverflow