All Posts


If you run the alert manually, does it find any data? If you want to find an event that is generated at 12:30, that event will probably not be picked up until 12:45, when the alert next runs on your cron schedule. Your time range is set to the last 15 minutes, so depending on exactly WHEN your alert runs, you may miss events: if the event occurs at 12:30 and is indexed by Splunk at 12:30:04, but your search ran at 12:30:02, it will not find it. The next search, which might run at 12:45:06, will also not find it, as it only searches between 12:30:06 and 12:45:06. So please set your search to run with exact time specifiers, using "snap to time" with @m.
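For illustration, this is roughly what that looks like in savedsearches.conf (the stanza name is made up; the same values can be set in the alert's time range and schedule UI):

[my_15min_alert]
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m@m
dispatch.latest_time = @m

With the snap-to-minute (@m) on both ends, each run covers an exact, non-overlapping 15-minute window regardless of the few seconds of jitter in when the scheduler actually starts the search.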
Hi All - Pretty new to Splunk and having an issue sorting/parsing data from our syslog server. We have many RHEL7 Linux hosts all sending their logs to one server where they get aggregated. This works fine: I can go into /var/log/secure, messages, etc. and see entries from all the hosts we have. We are running a splunkforwarder on this host with the hope that it would forward all the data to Splunk as it hits this RHEL7 log aggregator. We just have a single search head/indexer, and if I run the query index="*" I do get quite a few results, BUT it only shows 2 hosts: the Splunk instance and the RHEL7 system that we are aggregating the logs on. If I change the search to index="*" hostname, with the hostname being one of the RHEL hosts, I can find the entries specific to that host. I hope this makes sense. So somehow I need to tell Splunk about these hosts so they are recognized as separate hosts. What can I do to make this work? Thank you all in advance!
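In case it is useful, the usual way to handle this is to override the host field at parse time using the hostname that syslog writes at the start of each line. A rough sketch (the sourcetype name and the regex are assumptions; adjust them to your actual syslog line format), which would go on the indexer, since that is where parsing happens in this setup:

props.conf
[your_syslog_sourcetype]
TRANSFORMS-set_host = syslog_host_override

transforms.conf
[syslog_host_override]
# capture the hostname that follows the syslog timestamp
REGEX = ^\w{3}\s+\d+\s+\d+:\d+:\d+\s+(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host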
The table _time _raw and spath effectively reparse the JSON; otherwise you have the extracted fields from the ingest as well as the fields from the spath. Without seeing the actual events, I can't tell what might be causing the disparity between the counts and the number of lines. Perhaps there are extra blank lines or newline characters.
I have installed Splunk forwarder 9.1.1 on a Linux server, but the splunk user and group could not be created by the RPM installation. I thought fixing that might resolve why I kept getting an inactive forward-server, but I ended up getting a new error. When I try to restart the Splunk forwarder, I get the following error: splunkd is not running. "failed splunkd.pid doesn't exist". And when I try to have the Splunk forwarder list the forward-server, I get the following error 3 times: 'tcp_conn_open_afux ossocket_connect failed with no such file or directory'. It still lists my server as an inactive one, despite another Splunk forwarder Linux host properly connecting to Splunk Enterprise via an SSL connection. I have also made sure that Splunk is listening on the receiving port (9997); it's the same port used by the other Linux host to forward logs.
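For what it's worth, a sketch of the kind of cleanup that sometimes resolves this (paths assume a default /opt/splunkforwarder install and that you intend to run as a splunk user; adjust to your environment):

# create the user and group the RPM could not create
groupadd splunk
useradd -g splunk splunk
# make sure the forwarder's files are owned by that user
chown -R splunk:splunk /opt/splunkforwarder
# start splunkd as that user, then recheck the forward-server status
sudo -u splunk /opt/splunkforwarder/bin/splunk start
sudo -u splunk /opt/splunkforwarder/bin/splunk list forward-server

The 'no such file or directory' from ossocket_connect is consistent with the CLI being unable to reach a splunkd that is not running, so getting splunkd to start (and stay up) is the first thing to sort out.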
@tedgett  I ended up taking the existing dashboard and making my own version with the corrected queries.
I checked the complete JSON and don't see any duplicates. I see some solutions in blog posts suggesting switching from "INDEXED_EXTRACTIONS = JSON" to "KV_MODE = json". I'm not sure that will work in my case. https://community.splunk.com/t5/Splunk-Search/INDEXED-EXTRACTIONS-JSON-limiting-multivalued-fields-to-10/td-p/279893
You possibly have duplicates/triplicates in your events.
Thanks one more time. Interestingly, your recent query is fetching only 77 values, whereas I have 182 values in the JSON file. Is this a Splunk limitation?
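One thing that might be worth checking (an assumption, not a confirmed diagnosis): when spath auto-extracts all fields from an event, it only reads the first 5000 characters by default, so a large JSON event can come back with fewer fields than it really contains. That cutoff lives in limits.conf on the search head:

[spath]
# how many characters spath reads when auto-extracting fields (default 5000)
extraction_cutoff = 10000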
Try removing the other fields | table _time _raw | spath | untable _time name state | eval date=strftime(_time,"%F") | xyseries name date state
Hello, I have a use case where I have a bunch of email alerts for which I need to determine the system name. For example, let's say I have the alerts:
1. File system alert on AAA
2. File system alert on server servernameaaaendservername
3. File system alert on server BBB
I have the list of these system names in a lookup table (around 100 unique names), so adding 100 lines of field_name LIKE "%systemname1%","systemname1" doesn't seem efficient. Is there a way to use a conditional statement with the lookup table to match the statements? I am trying to get the output below by using the system names found in the lookup table: if a system name in the lookup table matches what is found in the alert, output that system name.
Alert Name || System Name
File system alert on AAA || AAA
File system alert on server servernameaaaendservername || AAA
File system alert on server BBB || BBB
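One approach that may fit is a wildcard lookup, so the lookup command itself does the substring matching. A sketch (the lookup name, file, and field names are made up for illustration):

system_names.csv
alert_pattern,system_name
*AAA*,AAA
*servernameaaaendservername*,AAA
*BBB*,BBB

transforms.conf (or the lookup definition's advanced options)
[system_names]
filename = system_names.csv
match_type = WILDCARD(alert_pattern)

Search
... | lookup system_names alert_pattern AS alert_name OUTPUT system_name

That keeps the 100-odd names in the CSV rather than in 100 case()/like() lines.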
The Windows TA on the search heads is 8.6.0, and the Windows TA on the HF is 9.0.6. Here is the inputs.conf stanza for Security.

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
index = test_i
renderXml = true

The events are stream processed, and come in as JSON.
Thanks again! You're a lifesaver - your query works. However, for some reason I see state twice. Also, I see source, host, etc. being listed in the table.
It would help to know what you've tried so far and how those attempts failed you. One method that works well for me is the rex command. This regex matches everything after "Junk_Message>" up to the following "<" and puts it into a field called Junk_Message. | rex "Junk_Message>(?<Junk_Message>[^<]+)"
I need to extract a string from a message body and make a new field for it.
<Junk_Message> #body | Thing1 | Stuff2  | meh4 | so on 1 | extra stuff3 | Blah4 </Junk_Message>
I just need the text that starts with #body and ends with Blah4. To make things more fun, everything after #body is generated randomly.
Here is what I am attempting to write SPL to show. I will have users logged into several hosts, all using a web application. I want to see the last (most recent) activity performed by each user logged in. Here is what I have so far:  index=anIndex sourcetype=aSourcetype | rex field=_raw "^(?:[^,\n]*,){2}(?P<aLoginID>[^,]+)" | rex field=_raw "^\w+\s+\d+_\w+_\w+\s+:\s+\w+\.\w+\.\w+\.\w+\.\w+\.\w+\.\w+\.\w+,(?P<anAction>\w+)" | search aLoginID!=null | stats max(_time) AS lastAttempt BY host aLoginID | eval aTime = strftime(lastAttempt, "%Y-%m-%d %H:%M:%S %p ") | sort -aTime | table host aLoginID aTime | rename host AS "Host", aLoginID AS "User ID", aTime AS "User Last Activity Time" I am getting my data as expected by host and aLoginID, but I want to see only the most recent anAction. When I add anAction to my BY clause (host aLoginID anAction), I start seeing the userID repeated in my results, as I would expect, since each anAction "name" is different, but I am only seeing one row for each anAction name. I think I am on the right 'path', but I want to see 1 row for each user, not 1 row for each userID & action.
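In case it helps, a sketch along the lines of what you describe, keeping your field names and extractions and just changing the stats so the action rides along with the most recent event per host/user (latest() takes its value from the newest matching event):

index=anIndex sourcetype=aSourcetype
| rex field=_raw "^(?:[^,\n]*,){2}(?P<aLoginID>[^,]+)"
| rex field=_raw "^\w+\s+\d+_\w+_\w+\s+:\s+\w+\.\w+\.\w+\.\w+\.\w+\.\w+\.\w+\.\w+,(?P<anAction>\w+)"
| stats latest(anAction) AS lastAction max(_time) AS lastAttempt BY host aLoginID
| eval aTime = strftime(lastAttempt, "%Y-%m-%d %H:%M:%S")
| sort - lastAttempt
| table host aLoginID lastAction aTime
| rename host AS "Host", aLoginID AS "User ID", lastAction AS "Last Action", aTime AS "User Last Activity Time"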
Assuming your timestamp is in _time and your events (as shown) are in _raw, try this | spath | untable _time name state | eval date=strftime(_time,"%F") | xyseries name date state
I am trying to generate three reports with stats. The first is where jedi and sith have matching columns. The third is where jedi and sith do not match. Example:
index=jedi | table saber_color, Jname, strengths
index=sith | table saber_color, Sname, strengths
I need to list where Jname=Sname. The third one is where Jname!=Sname. The caveat is I cannot use join for this query. Any good ideas?
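One join-free sketch (field names taken from your example; the coalesce assumes each event carries either Jname or Sname, not both) is to search both indexes at once and let stats line them up by name:

index=jedi OR index=sith
| eval name=coalesce(Jname, Sname)
| stats values(saber_color) AS saber_color values(strengths) AS strengths dc(index) AS sources BY name
| where sources=2

Names that appear in both indexes (sources=2) are your Jname=Sname matches; flip the final test to sources=1 for the report of names that only exist on one side.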
This is a simple search, which gives me this result. The result contains fields which contain "mobilePhoneNumber" OR "countryCode" OR "mobilePhoneNumber AND countryCode". I want to return the count (in one line) of all fields which contain both mobilePhoneNumber and countryCode ("mobilePhoneNumber AND countryCode").
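If mobilePhoneNumber and countryCode are already extracted as fields, a minimal sketch would be:

<your base search>
| where isnotnull(mobilePhoneNumber) AND isnotnull(countryCode)
| stats count

If instead they are literal strings inside the raw event rather than field names, adding both terms to the base search (yoursearch mobilePhoneNumber countryCode | stats count) gives the same single-line count.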
Thanks again! Yes! The JSON files below are generated every day, and I would like to show them in a table format as below.

Source: Group01/1318/test.json
Generated timestamp: 11/12 12:00 AM
{ "Portfolio_Validate1":"skipped", "Portfolio_Validate2":"passed", "Portfolio_Validate3":"passed", "Portfolio_Validate4":"broken" }

Source: Group01/1319/test.json
Generated timestamp: 11/13 12:00 AM
{ "Portfolio_Validate1":"passed", "Portfolio_Validate2":"passed", "Portfolio_Validate3":"passed", "Portfolio_Validate4":"broken" }

Source: Group01/1320/test.json
Generated timestamp: 11/14 12:00 AM
{ "Portfolio_Validate1":"passed", "Portfolio_Validate2":"failed", "Portfolio_Validate3":"passed", "Portfolio_Validate4":"passed" }

Desired table:

                      11/14 12:00 AM   11/13 12:00 AM   11/12 12:00 AM
Portfolio_Validate1   passed           passed           skipped
Portfolio_Validate2   failed           passed           passed
Portfolio_Validate3   passed           passed           passed
Portfolio_Validate4   passed           broken           broken
Hello, I have a system log which contains different DNS error messages (in the 'Message' field), and I am looking for an easy way to provide a short, meaningful description for those messages, either by adding a new field representing each unique DNS error message, or by adding text to the Message field. Here's an example; one event contains the following:  Message="DNS name resolution failure (sos.epdg.epc.mnc720.mcc302.pub.3gppnetwork.org)" This error is related to WiFi calling, so I would like to associate a description or tag with that specific message, e.g. "WiFi calling". Thoughts?
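One lightweight option is an eval/case on the Message field; a sketch, where the patterns and labels are assumptions you would extend for your other DNS messages:

... | eval description=case(
    match(Message, "DNS name resolution failure \(sos\.epdg\.epc\..*3gppnetwork\.org\)"), "WiFi calling",
    match(Message, "DNS name resolution failure"), "DNS failure (other)",
    true(), "Unclassified")

If the list of messages grows, a lookup file mapping message patterns to descriptions (with match_type = WILDCARD on the pattern field) tends to be easier to maintain than a long case().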