What is the best way to spoof run-anywhere fake data for a question?

woodcock
Esteemed Legend

Many people ask questions here that are tricky enough that the only way to get an answer that works is to play around with the data quite a bit. In order to do this, we have to fake data first. For the following data set, what is the best way to do it?

host   source  count name
host1  sourceA 33    Inky
host2  sourceA 23    Pinky
host3  sourceB -2    Blinky
host4           5    Clyde

What about for multi-value fields?
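One possible run-anywhere sketch for exactly this table, using the | makeresults command mentioned in the answers below (host4's source is simply left unset, and the multi-value question is covered by one extra eval with split()):

| makeresults count=4
| streamstats count AS row
| eval name=case(row=1,"Inky", row=2,"Pinky", row=3,"Blinky", row=4,"Clyde")
| eval host="host".row
| eval source=case(row<=2,"sourceA", row=3,"sourceB")
| eval count=case(row=1,33, row=2,23, row=3,-2, row=4,5)
| eval mv=split("mv1,mv2,mv3", ",")
| table host source count name mv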

1 Solution

martin_mueller
SplunkTrust

I frequently do something like this:

| stats count | eval field = "val1 val2 val3" | makemv field | mvexpand field
| eval mv = "mv1 mv2 mv3" | makemv mv
| streamstats count | eval val = random()%100
| eval _time = now() + random()%100 | sort - _time

Gives you five things:

  • three events to play with
  • single- and multi-value fields
  • a count or id
  • numerical data
  • timestamps

Pick what you need and re-assemble for each sample data task.


woodcock
Esteemed Legend

There is a new command for this that can be used instead of | noop | stats count AS ...; it is | makeresults:

https://docs.splunk.com/Documentation/Splunk/6.5.2/SearchReference/Makeresults
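A minimal sketch of that command (makeresults sets _time to the current time on each generated result, so the evals below just spread the rows out and attach a random value):

| makeresults count=3
| streamstats count
| eval _time = _time - count*60
| eval val = random()%100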


woodcock
Esteemed Legend

I typically do something very similar to @martin_mueller like this:

| metadata type=hosts | head 1 | eval name="Inky Pinky Blinky Clyde" | makemv name | mvexpand name

This gives me my 4 events, and it does it quickly because nothing is faster than "head 1" (I think).
Now I can set my other fields' values with case statements like this:

| eval host=case(name="Inky", "host1", name="Pinky", "host2", name="Blinky", "host3", name="Clyde", "host4")
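Extended with the remaining columns from the question (the string values quoted as literals, and host4's source again left unmatched so it stays null), the same pattern might look like:

| metadata type=hosts | head 1
| eval name="Inky Pinky Blinky Clyde" | makemv name | mvexpand name
| eval host=case(name="Inky", "host1", name="Pinky", "host2", name="Blinky", "host3", name="Clyde", "host4")
| eval source=case(name="Inky", "sourceA", name="Pinky", "sourceA", name="Blinky", "sourceB")
| eval count=case(name="Inky", 33, name="Pinky", 23, name="Blinky", -2, name="Clyde", 5)
| table host source count name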

martin_mueller
SplunkTrust

Technically, this is much faster than | head 1:

| noop | stats count | ...

Can't get faster than not even loading one event...

woodcock
Esteemed Legend

Except that you don't need the noop part:

| stats count | ...

acharlieh
Influencer

Need? Maybe not, but talking in terms of pure speed... It's been a while since I did this experiment, but use the job inspector and compare the performance metrics of | stats count and | noop | stats count. If I remember correctly, in a distributed environment the former actually distributes a search (that happens to return nothing from anywhere), while the latter literally does nothing and counts it. So the former is affected by connectivity to the indexers, and the latter is not. 🙂

woodcock
Esteemed Legend

Brilliant; I would not have even thought to check this!

martin_mueller
SplunkTrust

Small update: since 6.3 there is a dedicated command to make artificial results: http://docs.splunk.com/Documentation/Splunk/6.3.1/SearchReference/Makeresults

acharlieh
Influencer

"Best" is rather subjective, and varies widely with the question being asked. However I try to use the most straightforward method as needed for a particular problem. In your example case As you gave sample data in a tabular form I would use |noop|stats count to get a single result record, followed by eval to paste in your table as _raw then use multikv to split into records and fields. As I previously pointed out in a comment.

Multivalue fields would be handled no differently: just pick a delimiter that doesn't appear elsewhere, serialize the multivalue fields with it, and after multikv, use eval with the split function to make your mv field. (I used a similar principle in the original part of this answer, encoding multivalue fields as single-value fields with *** as a delimiter.)
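The serialize-then-split step on its own, as a minimal sketch (field names are illustrative):

| stats count
| eval combined="Inky***Pinky***Blinky"
| eval mv=split(combined, "***")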

When data isn't provided, gentimes can generate a bunch of time slots quickly. A few evals making use of random() with appropriate math to constrain ranges, and you have magically generated massive sets of data. Using summary indexing commands and temporary indexes then lets you keep that generated set and see what manipulations can be done.
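For instance, a sketch generating a week of hourly rows with a bounded random value (gentimes emits a starttime field, copied here into _time):

| gentimes start=-7 increment=1h
| eval _time=starttime
| eval val=random()%100
| table _time val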

Another method I've used was to take a string, split it into a multivalue field, mvexpand, and then use auto extraction, as in this gist. I don't remember whether I generated or was given that data.
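That split-expand-extract pattern might be sketched like this (the sample key=value string is made up; extract runs Splunk's automatic key=value extraction over _raw):

| stats count
| eval events="host=host1 count=33;host=host2 count=23;host=host3 count=-2"
| makemv delim=";" events
| mvexpand events
| rename events AS _raw
| extract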

But in short, there is no best way, use the many tools of Splunk and do whatever is easiest for the problem at hand.

esix_splunk
Splunk Employee

A combination of _internal, eventgen, and the Windows or NIX TA.

These can cover most general questions about the functionality of Splunk Search. There are examples of almost all kinds of datasets included in these, and poring over them is a great way to learn Splunk.
