Idea Number Three

...But thoughts about my research would not let me go. I had an idea: how can algorithms for processing highly fuzzy data be implemented in existing information systems?
Suppose we have a database where precise data is collected somehow (the collection method is not important right now). The data is structured and quite crisp, represented as time series. The question is: how can we process this data quickly using the principles of fuzzy logic?
Imagine the database contains the following tables:
temperatures (id, value, date, place) — a time series of ambient temperatures: identifier, temperature value, date, and place.
yields (id, value, date, place, culture) — harvested-yield data extended with crop information.
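To make the setup concrete, here is a minimal sketch of these two tables using SQLite via Python. The column types, the in-memory database, and the sample rows are my assumptions for illustration; the original text only lists the column names.

```python
import sqlite3

# Sketch of the schema described above (assumed types; the text names only columns).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temperatures (
    id    INTEGER PRIMARY KEY,
    value REAL,      -- ambient temperature, degrees Celsius
    date  TEXT,
    place TEXT
);
CREATE TABLE yields (
    id      INTEGER PRIMARY KEY,
    value   REAL,    -- harvested yield, tons
    date    TEXT,
    place   TEXT,
    culture TEXT     -- crop, e.g. 'wheat'
);
""")

# Hypothetical sample rows matching the figures used later in the text.
conn.execute("INSERT INTO temperatures (value, date, place) "
             "VALUES (21.3, '2020-07-15', 'field A')")
conn.execute("INSERT INTO yields (value, date, place, culture) "
             "VALUES (126.723, '2020-09-01', 'field A', 'wheat')")

temp = conn.execute("SELECT value FROM temperatures").fetchone()[0]
print(temp)  # 21.3
```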
This is a fairly standard representation of data—precise and understandable. But here is the question: do we actually need that level of precision for comparison and analysis? I believe we do not. We do not need to compare 21.3°C to 126.723 tons of grain in order to identify patterns. It is enough to know that 21.3°C is a sufficient average temperature during the growing season to achieve a good harvest of 126.723 tons.
Notice that I used adjectives: "sufficient" for temperature and "good" for yield. Those are no longer exact values, but their human, subjective interpretation—which is exactly what fuzzy data represents. In a particular context (for example, wheat cultivation), such interpretations can be far more useful than absolute numbers.
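Such a subjective label can be modeled with a fuzzy membership function that maps a crisp value to a degree of membership in a term like "sufficient". Below is one common choice, a trapezoidal membership function; the breakpoints (15, 19, 24, 28 °C) are hypothetical values I picked for the wheat example, not figures from the text.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical breakpoints for "sufficient growing-season temperature" (wheat);
# in practice these would come from an expert or be fitted to data.
def sufficient_temperature(t):
    return trapezoid(t, 15.0, 19.0, 24.0, 28.0)

print(sufficient_temperature(21.3))  # 1.0 -> fully "sufficient"
print(sufficient_temperature(17.0))  # 0.5 -> partially "sufficient"
```

The same construction, with different breakpoints, would model "good" for yields.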
The next day has come. Let’s continue.
The idea is to add several objects to the database that automatically transform precise data into fuzzy representations using rules defined by context.
Let’s start with a contexts table (id, name, description), which defines the context of analysis (for example, crop production, wheat crop). Then, for the temperatures table, let’s create a temperaturesrules table (id, contextid, expression, parameters).
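One way such a rule row could work is sketched below: the expression column names a membership function, and parameters stores its arguments. The JSON encoding of parameters and the function-name dispatch are my assumptions; the text only lists the columns of temperaturesrules.

```python
import json

# A hypothetical temperaturesrules row (assumed encoding: 'expression' names a
# membership function, 'parameters' is JSON with its breakpoints).
rule = {
    "contextid": 1,  # refers to the 'wheat crop' row in contexts
    "expression": "trapezoid",
    "parameters": json.dumps({"a": 15.0, "b": 19.0, "c": 24.0, "d": 28.0}),
}

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Registry mapping expression names to callables.
EXPRESSIONS = {"trapezoid": trapezoid}

def apply_rule(rule, value):
    """Fuzzify a crisp value using the rule's expression and parameters."""
    fn = EXPRESSIONS[rule["expression"]]
    params = json.loads(rule["parameters"])
    return fn(value, **params)

print(apply_rule(rule, 21.3))  # 1.0
```

With this shape, adding a new context or linguistic term means inserting rows, not changing code.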