All Topics

Hello Everyone, I have a single source that needs to be monitored which exists on 10 different servers. I am planning to assign one sourcetype for every two servers. Can someone tell me how I can achieve this? Please let me know if you need more information. Thanks in advance.
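One way to sketch this is with the deployment server: create five server classes of two hosts each, and deploy a different app to each class whose inputs.conf assigns the sourcetype. A minimal sketch, where the log path and sourcetype names are assumptions for illustration:

```
# inputs.conf in the app deployed to the first pair of servers
# (path and sourcetype names are hypothetical)
[monitor:///var/log/myapp/myapp.log]
sourcetype = myapp_group1
index = main

# the app deployed to the second pair would be identical except:
# sourcetype = myapp_group2
```

Each pair of servers receives its own copy of the app, so the same source ends up with a different sourcetype per group.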
I have noticed that a Splunk 7.2.9.1 universal forwarder on SUSE Linux 12.4 stops communicating with the deployment server and stops forwarding logs to the indexer after a certain period of time. The splunkd process appears to be running while the issue persists. I have to restart the UF for it to resume communicating with the deployment server and forwarding logs, but communication stops again after a while. I cannot see anything specific in splunkd.log while the issue occurs; however, I noticed the message below in watchdog.log:

06-16-2020 11:51:09.055 +0200 ERROR Watchdog - No response received from IMonitoredThread=0x7f24365fdcd0 within 8000 ms. Looks like thread name='Shutdown' is busy !? Starting to trace with 8000 ms interval.

Can somebody help me understand what is causing this issue?
Hi, I have a data input (directory monitor) for /opt/splunk/data/test. Every day a new CSV file is copied into this directory, and Splunk starts indexing it. However, the rows in the CSV file are indexed with different timestamp (_time) values. E.g. the file has 3382 events indexed, but running

index=caseload host=XXXX source="/opt/splunk/data/test/testfile.csv" | stats count by _time

yields something like:

_time                  count
2015-04-17 04:56:49    22
2016-01-08 19:51:49    33
2016-01-18 12:20:09    11
2016-02-07 21:15:09    18

Shouldn't they all have the current time, e.g. "2020-06-17 10:10:10", instead of various different timestamps? I'm thinking Splunk is trying to find some value per row that represents a timestamp and parse it, but I don't even see any "2015-04-17" in those 22 rows. How do I make the directory monitors index each event using the current timestamp?
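If the goal is to stamp every event with index time rather than parse a timestamp out of the row, DATETIME_CONFIG = CURRENT in props.conf does exactly that. A minimal sketch, assuming the input's sourcetype is named my_csv (an assumption; use whatever sourcetype the input actually assigns):

```
# props.conf on the parsing tier (indexer or heavy forwarder)
[my_csv]
DATETIME_CONFIG = CURRENT
```

Without this setting, Splunk scans the beginning of each event for anything that looks like a date, which is why it picks up stray values such as "2015-04-17" from row contents.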
I am running the query below to get the total count by date_mday:

search query | eval ver=substr(av,1,4) | stats count(ver) by date_mday

and getting results for the total count by month day:

date_mday    count
1            23
2            25
3            35
4            21

However, I want the results broken out per ver value plus a total, something like:

date_mday    ver1234    ver2345    ver3456    ver4567    total Count
1            10         2          0          11         23
2            9          5          2          9          25
3            11         7          4          13         35
4            8          0          2          11         21

Since eval (eval ver=substr(av,1,4)) populates the values of ver dynamically, I can't use the | stats count(eval()) function. Please help me out.
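One way to sketch this without hard-coding the ver values is chart, which pivots on the dynamic field, followed by addtotals for the row total:

```
search query
| eval ver=substr(av,1,4)
| chart count over date_mday by ver
| addtotals fieldname="total Count"
```

chart ... by ver creates one column per distinct ver value found at search time, and addtotals sums the numeric columns of each row into a "total Count" column.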
When I run this search in the Web UI I get the correct results. When it is run from a Python script, the "count(eval(RequestTime<2.00)) as PlaybackNumSuccessful" returns 0 when it should not.

search index=cdvr host=* AND source="/var/log/nginx/access.log" AND sourcetype="gemini-ecdn-nginx-access"
| rex field=_raw ".*?\t.*?\t.*?\t.*?\t(?<Method>\w+)\s/(?<URI>.+?)\sHTTP.+?\t.*?\t(?<Status>.+?)\t.*?\t.*?\t.*?\t.*?\s.*?\t.*?\t(?<host_header>.+?)\t"
| rex field=URI "(?<RecordingID>.*)\.(?<resource>.*)?\?.*"
| dedup RecordingID
| search Method=GET resource="m3u8"
| stats count(eval(RequestTime<2.00)) as PlaybackNumSuccessful count(eval(RecordingID)) as PlaybackNumTotal
| eval PlaybackNumFailed=(PlaybackNumTotal-PlaybackNumSuccessful)
| eval SuccessPer = (PlaybackNumSuccessful/PlaybackNumTotal)*100
| eval PlaybackLatencyLessThan2SecSuccessRate=round(SuccessPer, 3)."%"
| fields PlaybackNumTotal PlaybackNumFailed PlaybackLatencyLessThan2SecSuccessRate

Any ideas why?
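Not a definitive answer, but two things worth checking. First, when the same string is embedded in a Python script, every backslash in the rex patterns must survive Python's own escaping (a raw string, r"...", avoids \t and \s being silently rewritten). Second, count(eval(...)) only counts events where the expression evaluates to true and non-null, so if RequestTime is not being extracted in the script's search context, the count drops to 0. A sketch of a stats line that also exposes missing RequestTime values:

```
| stats sum(eval(if(RequestTime < 2.00, 1, 0))) as PlaybackNumSuccessful
        count(eval(RecordingID)) as PlaybackNumTotal
        sum(eval(if(isnull(RequestTime), 1, 0))) as MissingRequestTime
```

If MissingRequestTime is nonzero only when run through the script, the problem is field extraction (or string escaping) rather than the stats logic.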
I have this data:

_time                        EventCode    Message
2020-06-16T19:48:53+00:00    4136         Too late now
2020-06-16T19:49:53+00:00    1234         I don't know
2020-06-16T19:50:53+00:00    3456         Too busy, ask again later
2020-06-16T19:51:53+00:00    1256         Everything is happening at once
2020-06-16T19:51:53+00:00    1257         And right now, as well

Notice that the last 2 events have the same timestamp. I untable the events using this syntax:

...| untable _time FieldName FieldValue

The results appear as this:

_time                  FieldName    FieldValue
2020-06-16 12:51:53    EventCode    1257
2020-06-16 12:51:53    Message      And right now, as well
2020-06-16 12:51:53    EventCode    1256
2020-06-16 12:51:53    Message      Everything is happening at once
2020-06-16 12:50:53    EventCode    3456
2020-06-16 12:50:53    Message      Too busy, ask again later
2020-06-16 12:49:53    EventCode    1234
2020-06-16 12:49:53    Message      I don't know
2020-06-16 12:48:53    EventCode    4136
2020-06-16 12:48:53    Message      Too late now

When I reverse the untable by using the xyseries command, the most recent event gets dropped:

...| xyseries _time FieldName FieldValue

_time                  EventCode    Message
2020-06-16 12:48:53    4136         Too late now
2020-06-16 12:49:53    1234         I don't know
2020-06-16 12:50:53    3456         Too busy, ask again later
2020-06-16 12:51:53    1256         Everything is happening at once

The event with EventCode 1257 and Message "And right now, as well" is missing. It's like the xyseries command performs a dedup on the _time field. Is this a known issue? Some suggested that I avoid using _time in the untable and xyseries commands, and instead do this:

... | streamstats count as recno | untable recno FieldName FieldValue | xyseries recno FieldName FieldValue

which does return all 5 events.
In answers.splunk.com, there was an RSS feed for whenever anyone posted a new question. When someone posts a question, I want to know as often as I have set my RSS feed reader to poll. So I could poll on demand (or once a day, or whatever) and find the new posts, without having to navigate into each section of this site, figure out whether I have already seen a message, and end up missing questions I could help on. I found a way to get all posts using Subscriptions within community.splunk.com, but that's not going to work for me. Is there a way to replicate the old RSS functionality of answers.splunk.com in community.splunk.com?
Hi All, Long story short: I'm looking to monitor a remote directory for new files and changes to files, and send this information to Splunk. To re-emphasize, due to the nature of these files, I do NOT want to ingest the files themselves into Splunk. Metadata like size, paths, owners, changes, etc. is what I am looking for. I have discovered and set up Luke Murphey's "File/Directory Input" app: https://splunkbase.splunk.com/app/2776 However, after configuration, I'm not seeing anything come into Splunk. My example path within this app on my Splunk server (say 10.10.10.10) is set to something like this for my remote server directory: 10.10.10.20:/directory/to/watch/ Is this app capable of doing that remotely? Should the path be something like user@ip:/path/to/folder? Wouldn't I need SSH keys of some sort to do this? If this app isn't the solution, can a universal forwarder be configured to do this monitoring and forward metadata without forwarding the files themselves? Thanks in advance for any help.
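A sketch of one common workaround, under the assumption that the app reads the local filesystem only (which would explain why an ip:/path value produces nothing): install a universal forwarder on the remote host (10.10.10.20) and use a scripted input there that emits only metadata, e.g. the output of find ... -printf with path, size, owner, and mtime. The inputs.conf side might look like this (script name, sourcetype, and interval are all assumptions):

```
# inputs.conf on the universal forwarder on 10.10.10.20
# list_meta.sh (hypothetical) would run something like:
#   find /directory/to/watch -printf '%TY-%Tm-%Td %TT path=%p size=%s owner=%u\n'
[script://./bin/list_meta.sh]
interval = 300
sourcetype = dir_metadata
index = main
```

This forwards one metadata line per file every 5 minutes; the file contents never leave the host.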
Hey all. So there is this thing that I periodically trawl the web for and never seem to find any results. It's a small thing but a huge bugbear. I really wanted all of our PagerDuty alerts using the app in Splunk to auto-resolve when the trigger had cleared. It looks like I may have come across a possible solution, and I wanted to see if anyone else has done this or found a better way, and to share in case I can help someone else. Today I finally managed to get this working, and the trick was a new feature in PagerDuty called Event Rules. Essentially you can create an Event Ruleset, which will get you a new integration URL and key, and allow you to apply rules to non-standard fields in the payload. An example ruleset would be based on the following result payload:

"result": {
    "routing_key": "your eventruleset routing key",
    "dedup_key": "your unique key, maybe the alert name",
    "event_action": "trigger or resolve",
    "severity": "critical or warning etc",
    "summary": "some high level info",
    "source": "usually the machine with a problem"
}

Your event rules then may look something like this:

if result.event_action=resolve then resolve the incident using the dedup key
if result.event_action=trigger and result.severity=critical then severity is critical and raise a new alert
etc.

Now this does raise a small problem: if your alert returns no rows, then you won't be able to send the resolve. The below is extremely simple but does the job as an example, by adding a stats count. The only caveat is that if you want an alert per event, you need to do some more work.
index=_internal ERROR
| stats count as event_count
| eval dedup_key="ddddd"
| eval severity="warning"
| eval event_action=case(event_count>0,"trigger",1=1,"resolve")
| eval summary="A summary of this event"
| eval source="servera.example.com"
| eval routing_key="XXXXXXXXX"
| table dedup_key, severity, event_action, summary, source, routing_key

All that's required after this is to update your integration URL to use the Event Rules API and key. This can however be spammy, because an API call will be made every time the alert runs. To get past that I created a KV store collection called state_alert and do an outputlookup to it each time the alert runs. The fields I record are _key=dedup_key, event_action, date_last_change and date_last_run. In the PagerDuty alert properties you can change the alert trigger to custom and set the search to "where event_action!=event_action_lookup". Now it will only fire if this run's event_action (trigger or resolve) is different from the last one. This will reduce noise and allow for some stateful tracking of each alert if so desired. Below is an example of the code I put after the table example above to handle this logic:

| eval _key=dedup_key
| eval date_last_run=now()
| join type=left [| inputlookup state_alert | rename event_action AS event_action_lookup | rename date_last_change AS date_last_change_lookup | fields event_action_lookup, date_last_change_lookup]
| eval date_last_change=case(event_action!=event_action_lookup, now(), 1=1, date_last_change_lookup)
| outputlookup state_alert append=true key_field=_key

Hopefully someone finds this useful, can improve on it, or show me a better way.
I have a JSON with the following structure:

{
  "version": "v0.2",
  "prints": {
    "urls": [
      {
        "response_time": 256,
        "uri": { "bool": false, "name": "abc" },
        "Time": { "total": 52, "db": 11 }
      },
      {
        "response_time": 578,
        "uri": { "bool": false, "name": "xyz" },
        "Time": { "total": 78, "db": 13 }
      }
    ]
  }
}

I have to create a table with the columns _time, rv, av, wm, an, et, uri_name, response_time, db_time, total_time. Here is my query, which I'm trying to optimize by removing the multiple spaths:

basic search rv=*, av=*, wm=*, an=*, et=*
| spath input=data path=prints.urls{} output=urls
| spath input=urls path=response_time output=response_time
| spath input=urls path=uri.name output=uri_name
| spath input=urls path=Time.db output=db_time
| spath input=urls path=Time.total output=total_time
| table _time, rv, av, wm, an, et, uri_name, response_time, db_time, total_time

I can't help but think there would be a more optimized way to get the table.
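One sketch that collapses the per-field spaths into a single one: mvexpand the urls multivalue so each URL object becomes its own row, run spath once with no path (which extracts every key from that object), then rename to the desired column names:

```
basic search rv=*, av=*, wm=*, an=*, et=*
| spath input=data path=prints.urls{} output=urls
| mvexpand urls
| spath input=urls
| rename uri.name AS uri_name, Time.db AS db_time, Time.total AS total_time
| table _time, rv, av, wm, an, et, uri_name, response_time, db_time, total_time
```

Two caveats: mvexpand is subject to memory limits on large result sets, and this yields one row per URL rather than multivalue fields per event, which may or may not be the desired shape.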
Hello All, (Environment: Splunk Cloud v8.x) When we use the Splunk Mobile app we are unable to view any data from any dashboards we use. We get a server error or "no data available" (regardless of the search time). The following article describes an issue with limitations: https://community.splunk.com/t5/All-Apps-and-Add-ons/Do-Splunk-Mobile-app-have-some-Visualization-Limitations/td-p/471164 Is this the issue we are having? If so, what is the purpose of this app if Splunk has not fixed this yet (the forum post is from 10/20/2019)? This is completely useless for us; I don't get it. Can any Splunkers out there help me understand whether there is an improvement planned for this app? TIA, Bob
Hey Splunkers, novice question. I work in a Windows environment. Anybody have a good metric for host network performance? I was using (TA Windows) something along the lines of bytes_total/sec * 8 * 100 / bandwidth (essentially giving me bandwidth utilization as a percentage). I was also using bytes_in and bytes_out. However, none have proven to be very useful for indicating actual host network performance issues. I was thinking about using web pings to calculate latency. Anybody have any ideas? Anybody have a query that indicates an increase in dropped packets or an increase in network errors that they have implemented and find useful? Any help is much appreciated. Thanks!
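For the dropped-packets and errors angle, the Windows Perfmon Network Interface object exposes error and discard counters, so if the Splunk Add-on for Windows is already collecting Network Interface data, a sketch like this could trend them (the index and sourcetype names are assumptions; match them to your perfmon inputs):

```
index=perfmon sourcetype="Perfmon:Network Interface"
    counter="Packets Received Errors" OR counter="Packets Outbound Errors"
    OR counter="Packets Received Discarded" OR counter="Packets Outbound Discarded"
| timechart span=5m avg(Value) by counter
```

A sustained rise in any of these, viewed next to the utilization percentage already being computed, is usually a better signal of a host-level network problem than throughput alone.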
Has anyone integrated Prisma Cloud into Splunk Enterprise on AWS (either via SQS or API Gateway + Lambda + HEC) to view alert notifications from Prisma Cloud in Splunk? https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/configure-external-integrations-on-prisma-cloud/integrate-prisma-cloud-with-amazon-sqs.html https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/configure-external-integrations-on-prisma-cloud/integrate-prisma-cloud-with-splunk    
Hello! I am just starting to work with data models (and eval expressions) and I am a bit stuck. I have a data model titled "Intrusion" which has 2 different indexes, A and B. Index A events include a field called "signature", but Index B events do not. Index B has a field titled "msg" which is similar to the signature field, so I want to map the msg field to the signature field in the data model. Currently the data model has "signature" as an eval expression field with the expression and field settings below. How can I edit the eval expression so that the "msg" field from Index B shows up as the "signature" field in the data model? Or do I need to add a new field/eval expression completely? Thank you!

Eval Expression: `if(isnull(signature) OR signature="","unknown",signature)`
Field Name: signature
Display Name: signature
Type: string
Flags: optional
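One way to sketch the change: fall back to msg before giving up and returning "unknown". coalesce() returns its first non-null argument, so events from Index B (no signature, but a msg) pick up msg, and events with neither still get "unknown":

```
if(isnull(signature) OR signature="", coalesce(msg, "unknown"), signature)
```

One edge case: an empty-but-present msg would come through as an empty string; wrap it as nullif(msg, "") if that matters.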
I have a small single-instance Splunk system. I'm receiving data for a handful of apps on this system. I have data in one index on the system that I want to also send to an external syslog server (while keeping the data in Splunk). I have read https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd#Syslog_data but that doesn't seem to indicate how to forward only a single index's data, and it looks like a heavy forwarder is required? Can this work with a single-instance setup?
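A sketch of the usual routing config, with the caveat that syslog routing is applied at parse time, so it selects data by sourcetype (or source/host) rather than by index; on an all-in-one instance the indexer does the parsing itself, so a separate heavy forwarder shouldn't be needed. The server address and sourcetype name below are assumptions:

```
# outputs.conf
[syslog:external_syslog]
server = 203.0.113.10:514

# props.conf -- apply to each sourcetype that feeds the index in question
[my_sourcetype]
TRANSFORMS-syslogroute = send_to_syslog

# transforms.conf
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = external_syslog
```

Since the data also continues to be indexed locally by default, nothing extra is needed to keep the data in Splunk.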
Could it be that there is no audit log (tied to a user) of an alert being modified and saved? I have looked really hard, and I'm stuck trying to use the REST API to compare the rule before and after.
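A hedged starting point, assuming the edits are made through Splunk Web: the UI access log records the REST calls behind a save, including the user, so a search like this should show who POSTed to a saved search (alerts are saved searches under the hood):

```
index=_internal sourcetype=splunkd_ui_access method=POST uri_path="*saved/searches*"
| table _time, user, method, uri_path, status
```

This shows who changed what and when, but not the before/after contents of the alert; diffing the configuration would still require something like periodic REST snapshots.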
Hey everyone. I am a newbie to Splunk and I am stuck on this problem. I have a column chart which shows data for a time range between two dates, e.g. 6/9/2020-6/23/2020, as in the first picture. The query I used for this is:

index=main sourcetype=timeline | eval dates=beginning_date."-".ending_date | stats count by dates, target | xyseries dates target count

Now, if I append another search to this which contains data on 6/15/2020, it treats 6/15/2020 as a separate date, as in the second picture. I want the line chart on date 2020-06-15 to go on top of the column chart, since that date lies between 2020-06-09 and 2020-06-23 (changed format of time); i.e. I want to overlay a line chart on top of the column chart. These are the fields I used:

beginning_date    ending_date
6/9/2020          6/23/2020

And I want to create the query with these fields and not _time. Any help is appreciated, and if there is anything you didn't understand in my question, please let me know. Thanks a lot.
For Splunk Cloud, I would like to enable user login to leverage LDAP against our Office 365 tenant, but I am struggling to find the right docs. I can find them for API connections, but not for Office 365 LDAP login to Splunk Cloud.
I have the query below; I need to generate a third column, or generate an alarm, when the values generated are 80% higher than those of the last 7 days:

earliest=-7d@d index=txt | eval ELAPTIME = round(ELAPTIME / 100,2)/60 | timechart eval(round(avg(ELAPTIME),2)) as "Job Execution"

Splunk result:

_time         Job Execution
2020-06-09    55.00
2020-06-10    51.74
2020-06-11    55.74
2020-06-12    70.00
2020-06-13    90.00
2020-06-14    85.00
2020-06-15    150

Note that on June 15 the job had a problem, because its execution time was more than 80% above the other days.
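A sketch of one way to flag this: compute a rolling baseline over the previous days with streamstats (current=f excludes the current day, so the spike doesn't inflate its own baseline), then compare each day against it:

```
earliest=-8d@d index=txt
| eval ELAPTIME = round(ELAPTIME / 100, 2) / 60
| timechart span=1d eval(round(avg(ELAPTIME),2)) as job_execution
| streamstats window=7 current=f avg(job_execution) as baseline
| eval pct_above = round((job_execution - baseline) / baseline * 100, 2)
| eval alarm = if(pct_above > 80, "ALARM", "ok")
```

Saved as an alert with a trigger condition on alarm="ALARM" (or pct_above > 80), this fires only on days like June 15, where 150 is well over 80% above the prior week's baseline.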