CROC is a recursive acronym: it stands for "CROC: a Representational Ontology for Concepts".
The Semantic Web has OWL, the Web Ontology Language, which provides 'ontologies' (sets of classes). CROC aims to provide 'conceptuologies' (sets of concepts) for artificial agents on the Semantic Web.
Well, of course that depends on how you define them. We take the definitions of [Millikan, 2000].
Classes are groupings of entities on the basis of common properties. For example, there may be classes of "red things" and "non-red things". Classes can therefore be used for efficient knowledge representation.
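To make this concrete, here is a minimal sketch (not CROC code; the entities and property names are invented for illustration) of a class as a predicate that groups entities by a common property:

```python
# Hypothetical entities, each with a "colour" property.
entities = [
    {"name": "apple", "colour": "red"},
    {"name": "sky", "colour": "blue"},
    {"name": "brick", "colour": "red"},
]

def is_red(entity):
    # The class "red things" is defined by a single common property.
    return entity["colour"] == "red"

# The predicate partitions all entities into two classes.
red_things = [e["name"] for e in entities if is_red(e)]
non_red_things = [e["name"] for e in entities if not is_red(e)]
```

The efficiency lies in the fact that one predicate stands in for an arbitrarily large set of entities.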
Concepts are abilities to reidentify [Millikan, 2000]. Whereas classification is a logical, rather technical process, identification may draw on various sources: for example, we can identify a friend by a written name, by her voice, by a portrait, by the sound of her footsteps, or by a style of handwriting.
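The contrast with a class can be sketched as follows (again not CROC code; the sources and evidence values are hypothetical): a concept is modelled as a bundle of independent identification routines, any one of which may suffice, and none of which is definitional.

```python
# A concept of a particular friend, as an ability to reidentify her
# from several independent sources of evidence.
friend_concept = {
    "written_name": lambda s: s == "Anna",
    "voice": lambda sample: sample == "anna_voice_print",
    "handwriting": lambda style: style == "anna_loopy_script",
}

def identify(concept, source, evidence):
    # Succeed if any available routine for this source fires;
    # no single property restriction defines the concept.
    routine = concept.get(source)
    return bool(routine and routine(evidence))

recognised = identify(friend_concept, "voice", "anna_voice_print")
not_recognised = identify(friend_concept, "written_name", "Bob")
```

Note that new identification routines (a portrait, footsteps) can be added without changing what the concept is a concept *of*.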
No, mostly not. Most natural kinds are neither classes nor fuzzy classes [Millikan, 2000]. There are, for example, no common properties for dogs involved in our concept of dogs: "There are no properties that every dog has in common with every other dog." [Millikan, 2000] Instead, what holds the group together is that its instances are causally related. This causal relation may make certain properties likely to be shared among instances: "There is a good explanation of why one is likely to be like the next." [Millikan, 2000]
First, different agents will have different class descriptions. If you treat a natural kind as a class, you have to pick a set of property restrictions, but one agent will pick different property restrictions than another. For example, agent A may require that dogs have four paws and a good nose, while agent B may require that dogs bark. For agent A, it will then be impossible to see a three-legged dog with a defective nose (but able to bark) as a dog; for agent B, to see a dog that never barks as a dog. And that is not the end of the story: how to identify "to bark", "nose" and "paw"?
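The disagreement between agents A and B can be sketched in a few lines (the property restrictions are the hypothetical ones from the example above, not anything prescribed by CROC):

```python
# Agent A restricts "dog" by paws and nose; agent B by barking.
def agent_a_is_dog(x):
    return x["paws"] == 4 and x["nose"] == "good"

def agent_b_is_dog(x):
    return x["barks"]

# Two perfectly real dogs that the class definitions mishandle.
three_legged_dog = {"paws": 3, "nose": "defective", "barks": True}
silent_dog = {"paws": 4, "nose": "good", "barks": False}

# Each agent rejects one of the dogs that the other accepts.
a_on_three_legged = agent_a_is_dog(three_legged_dog)  # False for A
b_on_three_legged = agent_b_is_dog(three_legged_dog)  # True for B
a_on_silent = agent_a_is_dog(silent_dog)              # True for A
b_on_silent = agent_b_is_dog(silent_dog)              # False for B
```

Since neither set of restrictions is wrong by its own lights, the two agents cannot resolve the disagreement by appeal to the class definitions alone.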
Second, it may be very hard to pick a property at all. What is a common property for "to bark"? What is a common property for "nose" or "paw"? If we push to define these things, we head for abstractions, like "a nose is an olfactory-type sense", depending on the identification of "olfactory-type" and "sense", or "a nose is an entity that is able to smell", depending on the identification of "to smell". Again, this puts an extra definitional burden on intelligent agents, and yields neither more understanding nor more consensus.
Third, such class definitions are context dependent. A dog that loses its fourth leg suddenly stops being a dog, in the eyes of agent A.
You may also like to read chapter 4 of my thesis for the answer to this question.
For the Semantic Web, of course: to understand content represented in their environment. Its many services and agents should be able to communicate via Semantic Web technology.
A Semantic Web based on 'ontologies' can represent knowledge as well, using class symbols, but these symbols are not identifiable by agents that do not have the 'ontology' built in (the 'interoperability problem').
In the future, artificial agents may be able to identify from video and audio representations as well, but that lies ahead. For now, CROC aims to provide artificial agents with abilities to identify from lexical representations.
My Master's thesis answers exactly this question in many of its aspects: what kinds of concepts there are, which relations and properties of concepts are important, what kinds of lexical representations there are, and finally how concepts can be grounded in representations.