All Posts

Well, the data is a little simplified, but the structure is the same. Actually, I am trying to compare not only prod_qual but other values as well. Now it is working. I am comparing both prod_qual and prod_rate, hence I have modified your query as below:

index=data1
| eval grain_name = json_array_to_mv(json_keys(data1))
| mvexpand grain_name
| eval data = json_extract(data1, grain_name), rate = json_extract(data, "prod_rate"), qual = json_extract(data, "prod_qual")
| table grain_name, qual, rate
| append
    [ search index=data2
    | eval grain_name = json_array_to_mv(json_keys(data2))
    | mvexpand grain_name
    | eval data2 = json_extract(data2, grain_name), rate2 = json_extract(data2, "prod_rate"), qual2 = json_extract(data2, "prod_qual") ]
| stats values(qual) as qual values(qual2) as qual2 values(rate) as rate values(rate2) as rate2 by grain_name
| eval diff = if(match(qual, qual2), "Same", "NotSame")
| eval diff2 = if(diff == "Same", if(rate==rate2, "Same", "NotSame"), "NotSame")
| table grain_name, qual, rate, diff2
Nope, that did not work either.
Hello good folks, I've this requirement where, for a given time period, I need to send out an alert if a particular 'value' doesn't come up. This is to be identified by referring to a lookup table which has the list of all possible values that can occur in a given time period. The lookup table is of the below format:

Time | Value
Monday 14: [1300 - 1400] | 412790 AA
Monday 14: [1300 - 1400] | 114556 BN
Monday 15: [1400 - 1500] | 243764 TY

Based on this, in the live count, for the given time period (let's take Monday 14: [1300 - 1400] as an example), if I do a stats count as Value by Time and I don't get "114556 BN" as one of the values, an alert is to be generated. Where I'm stuck is matching the time with the values. If I use inputlookup first, I am not able to pass the time from the master time picker, which will not allow me to check for a specific time frame (in this case an hour). If I use the index search first, I am able to match the time against the lookup by using | join type=left, but I am not able to find the missing values which are not there in the live count but present in the lookup. Would appreciate if I could get some advice on how to go about this. Thanks in advance!
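Not part of the thread, but as a rough sketch of one common pattern for this kind of "find what is missing" problem (assuming a lookup file called expected_values.csv with Time and Value columns, and live events that yield Time and Value fields in the same format; the index name and file name are placeholders):

index=live_index
| stats count by Time, Value
| append
    [| inputlookup expected_values.csv
     | eval count=0 ]
| stats sum(count) as total by Time, Value
| where total=0

The lookup rows contribute a count of 0, so after the sum any Time/Value pair that exists only in the lookup ends up with total=0 and can be used to trigger the alert, while still respecting whatever time range the outer search runs over.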
@PickleRick Okay thanks, but I didn't find any way to avoid duplication on the UF itself earlier. I was thinking to do it the other way: what if I enable "Suppress results triggering the alert" and set it to 24 hours? I think each unique ID event will alert once within that period. Below is the query:

index=pro sourcetype=logs Remark="xyz"
| dedup ID
There are docs like https://docs.splunk.com/Documentation/Forwarder/9.2.0/Forwarder/Installleastprivileged Also, recent versions of the Windows UF create a user with a relatively limited set of permissions. But it's a setup for a limited set of permissions for a typical use case. And as a fairly generic setup it can be both too "closed" (for example, you need a domain user with proper permissions to read remote shares) as well as too "open" (you don't need access to the Event Log if you're not planning to read from it).
That's what @isoutamo is talking about. This is what streamstats does. With a properly set window (either in terms of number of events or time) it can calculate stats over a moving window.
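As an illustrative sketch (not from the original reply; the field and alias names are made up), streamstats supports both kinds of window:

... | streamstats window=100 count as count_last_100_events
... | streamstats time_window=1d count as count_last_24h

The first form slides over a fixed number of events, the second over a fixed span of time, so you can pick whichever matches how your data arrives.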
So firstly you should get your data ingestion process right. Events should not be ingested multiple times. Since we don't know where this data comes from, we can't offer much advice here. You can open another thread in the "Getting data in" section about this problem.
Unfortunately, that's not it. Let me try to clarify. Right now, I get results with one value per day, so if I pick "last 7 days" I only see 7 data points, which is much too coarse. I'd prefer to have the normal "100 bins" or points of data, with each one being the count of events for the preceding 24h from where that data point/bin sits in time. The end result would be a much smoother chart, basically showing the count value my alert is checking. It's looking to me like as soon as I pick "last 7 days", I'm in the realm of days and I cannot plot with more granularity.
Is there a specific set of permissions for Splunk universal forwarders and their user account? Maybe a document that points to this?
Hi, @martynoconnor's solution worked for me. Best Regards
Small fix:

| eval MyNewField=Host+"@"+Domain
Hi, I'm not sure if I understood your question correctly, but maybe you could get this done with streamstats? You could use it first to calculate that sliding count for the previous 24h and then use timechart with values/max to show those in your chart. See https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Streamstats and use it with time_window=1d:

... | streamstats time_window=1d count as dailyCount
... | timechart max(dailyCount) as dailyCount max(threshold) as threshold ...

Use span on timechart and any other aggregate values which you may need.
r. Ismo
Weird. Could you check if there are any other errors? If it's just the MemoryError then you could try measuring the memory usage at the time of the upgrade in case anything explodes in memory usage during the upgrade.
How about:

| eval MyNewField=Host+Domain
Trying to uninstall Splunk Enterprise 7.0.1.0 from Windows 10.  I get a message from the uninstall process to "Insert the 'Splunk Enterprise' disk and click OK." The issue is I don't have a "Splunk Enterprise" disk.  Nor is there an msi file to use.   Please advise.
I currently have two different fields:

Host | Domain
F32432KL34 | domain.com

I wish to combine these into one field that shows the following:

F32432KL34@domain.com

How would you suggest going about this?
I tried sending a sample query via API:

search index=* earliest=-20m | eval bytes=10+10 | head 10 | table _time bytes

And I got the same Eval error as you. When I URL-encode the search query:

search%20index%3D%2A%20earliest%3D-20m%20%7C%20eval%20bytes%3D10%2B10%20%7C%20head%2010%20%7C%20table%20_time%20bytes

it returns results. Could you try URL-encoding your search query?
I'm trying to (efficiently) create a chart that collects a count of events, showing the count as a value spanning the previous 24h, over time, i.e. every bin shows the count for the previous 24h. This is intended to show the evaluations an alert is making every x minutes, where it triggers if the count is greater than some threshold value. I'm adding that threshold to the chart as a static line, so we should be able to see the points at which the alert could have triggered. I have the following right now, but it's only showing one data point per day when I would prefer the normal 100 bins:

... | timechart span=1d count
| eval threshold=1000

Hope that's not too poorly worded.
This is kind of sed-y, but it should work (assuming your automatic kv field extraction is working on your JSON event):

| spath input=_raw path=details output=hold
| rex field=hold mode=sed "s/({\s*|\s*}|,\s*)//g"
| makemv hold delim="\"\""
| mvexpand hold
| rex field=hold "(?<key>[^,\s\"]*)\"\s:\s\"(?<value>[^,\s\"]*)" max_match=0
| table orderNum key value orderLocation
Hi All,

I want to forward log data using the Splunk Universal Forwarder to a specific index on the Splunk indexer. I am running the UF and the Splunk indexer inside Docker containers. I am able to achieve this by modifying the inputs.conf file of the UF after the container is started:

[monitor:///app/logs]
index = logs_data

But, after making this change, I have to RESTART my UF container. I want to ensure that when my UF starts, it sends the data to the "logs_data" index by default (assuming this index is present in the Splunk indexer). I tried overriding the default inputs.conf by mounting a locally created inputs.conf to its location. Below is the snippet of how I am creating the UF container:

splunkforwarder:
  image: splunk/universalforwarder:8.0
  hostname: splunkforwarder
  environment:
    - SPLUNK_START_ARGS=--accept-license --answer-yes
    - SPLUNK_STANDALONE_URL=splunk:9997
    - SPLUNK_ADD=monitor /app/logs
    - SPLUNK_PASSWORD=password
  restart: always
  depends_on:
    splunk:
      condition: service_healthy
  volumes:
    - ./inputs.conf:/opt/splunkforwarder/etc/system/local/inputs.conf

But I am getting a weird error while the container is trying to start:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: [Errno 16] Device or resource busy: b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf' -> b'/opt/splunkforwarder/etc/system/local/inputs.conf'
fatal: [localhost]: FAILED! => {
    "changed": false
}
MSG: Unable to make /home/splunk/.ansible/tmp/ansible-moduletmp-1710787997.6605148-qhnktiip/tmpvjrugxb1 into to /opt/splunkforwarder/etc/system/local/inputs.conf, failed final rename from b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf': [Errno 16] Device or resource busy: b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf' -> b'/opt/splunkforwarder/etc/system/local/inputs.conf'

It looks like some process is trying to access inputs.conf while it's getting overridden. Can someone please help me solve this issue?

Thanks