All Posts



Try https://github.com/whackyhack/Splunk-org-chart.  (Play with the dashboard to find who's the big boss:-)
Not totally clear what the eventstats is doing here. It would help if you could illustrate the desired results from mock data. Do you mean to produce two tables like these?

1. superhero

archetype | id | strengths
superhero | superman | super strength, flight, and heat vision
superhero | batman | exceptional martial arts skills, detective abilities, and psychic abilities

2. villain

archetype | id | strengths
villain | joker | cunning and unpredictable personality

To do these, you can use

index=characters
| spath path={}
| mvexpand {}
| spath input={}
| fields id, strengths, archetype
| where archetype="superhero"
| stats values(*) as * by id

for superhero; for villain, use

index=characters
| spath path={}
| mvexpand {}
| spath input={}
| fields id, strengths, archetype
| where archetype="villain"
| stats values(*) as * by id

Here is an emulation for you to play with and compare with real data

| makeresults
| eval _raw="[ { \"id\": \"superman\", \"strengths\": \"super strength, flight, and heat vision\", \"archetype\": \"superhero\" }, { \"id\": \"batman\", \"strengths\": \"exceptional martial arts skills, detective abilities, and psychic abilities\", \"archetype\": \"superhero\" }, { \"id\": \"joker\", \"strengths\": \"cunning and unpredictable personality\", \"archetype\": \"villain\" } ]"
| spath
``` the above emulates index=characters ```
@PickleRick  “Whenever I click on ‘Return to Splunk,’ it redirects to the Splunk login page. Instead, I want it to redirect to a custom URL. When users face login issues, a message will pop up, and when they click ‘Return to Splunk,’ they will be redirected to the custom URL.” How can I do this?
If you have an identifier for each transaction, such as a transaction ID, use stats to get the earliest and latest times, e.g.

your search
| stats earliest(_time) as starttime, latest(_time) as endtime by transactionID
| eval duration=endtime-starttime
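A run-anywhere sketch of this approach, if you want to try it without real data (the makeresults/streamstats lines just fabricate two events for each of two hypothetical transaction IDs; everything from the stats onward is the actual technique):

```spl
| makeresults count=4
| streamstats count as n
| eval transactionID=if(n<=2, "txn-A", "txn-B")
| eval _time=_time - n*10
| stats earliest(_time) as starttime, latest(_time) as endtime by transactionID
| eval duration=endtime-starttime
```

Each fabricated transaction ends up with a duration of 10 seconds, since the events are spaced 10 seconds apart.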
@DATT , try using stats on those values

| stats delim="," list(INTEL) as INTEL, list(WEIGHT) as WEIGHT
| nomv INTEL
| nomv WEIGHT

Here is a run-anywhere example. Add/remove your columns according to the requirements

| makeresults
| fields - _time
| eval INTEL="A B C D E"
| makemv INTEL
| mvexpand INTEL
| streamstats count
| eval WEIGHT=count
| rename count as ID
| makemv delim="," INTEL
| rename comment as "Above is just data generation"
| stats delim="," list(INTEL) as INTEL, list(WEIGHT) as WEIGHT
| nomv INTEL
| nomv WEIGHT
Thanks, I found the error; it was a silly mistake of mine.
Hello everybody, I'm working on a query that does the following:

1. Pull records and mvexpand on a field named INTEL. This is a multi-value field that could have anywhere from 1 to 11 different values.
2. Once expanded, perform a lookup using INTEL to retrieve a field WEIGHT. A weight between 1 and 5 is assigned to each INTEL value.
3. After the lookup, collapse the split records back into one record.

At first glance I figured I could do `... | mvexpand | lookup | mvcombine | nomv` but since the records are no longer identical (both INTEL and WEIGHT now differ), I don't think I can use mvcombine anymore. To visually demonstrate the issue:

ID | INTEL
12345 | A, B, C, D

After mvexpand

ID | INTEL
12345 | A
12345 | B
12345 | C
12345 | D

After lookup

ID | INTEL | WEIGHT
12345 | A | 1
12345 | B | 2
12345 | C | 3
12345 | D | 4

Ultimately, I would like to get back to this

ID | INTEL | WEIGHT
12345 | A,B,C,D | 1,2,3,4

Any tips?
I have a dataset to visualize my organization in Splunk. When I search for Org=CDO, I get all the direct reports under the CDO, which include positions like CSO and CIO. Under each of these positions there are many VPs, and under each VP there are many directors. How can I retrieve the results for the entire hierarchy under the CDO using Splunk? We have a field named Org and another field named job_title. When I search Org=CDO, I get only the direct reports of the CDO; there is no other value in the raw event to extract. Any help would be appreciated.
I'm trying to subtract the total number of alerts I have that send an email from the total number of alerts that are bookmarked in SSE. The only examples I found on the community used either the same index or sub-searches (neither worked in my scenario).

My query for the alerts is:

| rest /services/saved/searches
| search alert_type!="always" AND action.email.to="production@email.com" AND title!="*test*"
| stats count(action.email.to) AS "Count"

My query for bookmarks is:

| sseanalytics 'bookmark'
| where bookmark_status="successfullyImplemented"
| stats count(bookmark_status_display) AS "Bookmark Status" by bookmark_status_display
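One common pattern for combining two unrelated generating searches is to append one to the other and then diff the two counts. The sketch below is untested against SSE — the `sseanalytics` command and its field names are taken from the question as-is — and it assumes both searches fit within append's subsearch limits:

```spl
| rest /services/saved/searches
| search alert_type!="always" AND action.email.to="production@email.com" AND title!="*test*"
| stats count AS alert_count
| append
    [| sseanalytics 'bookmark'
     | where bookmark_status="successfullyImplemented"
     | stats count AS bookmark_count]
| stats sum(alert_count) as alert_count, sum(bookmark_count) as bookmark_count
| eval difference = bookmark_count - alert_count
```

The final stats collapses the two appended rows into one so the eval can subtract across them.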
Hi @ITWhisperer Thanks for your response. I had not extracted any yet because the logs are not yet in Splunk, but they will be soon. Yes, the transaction IDs are unique. The below is what I got from CloudWatch.

2024-08-12T10:04:16.962-04:00 (434-abc-345789-de456ght) Extended Request Id: cmtf1111111111111111=
2024-08-12T10:04:16.963-04:00 (434-abc-345789-de456ght) Verifying Usage Plan for request: AAAAAAAAAAAAAAAAAAAAAAAA
2024-08-12T10:04:16.964-04:00 (434-abc-345789-de456ght) BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
2024-08-12T10:04:16.964-04:00 (434-abc-345789-de456ght) AAAAAAAAAABBBBBBBBBBBBBCCCCCCCCCCCCCCCCCC
2024-08-12T10:04:16.964-04:00 (434-abc-345789-de456ght) Starting execution for request: 8hhhhh-cdcd-434444-8bbb-dedr44444
2024-08-16T10:04:16.964-04:00 (434-abc-345789-de456ght) HTTP Method: POST, Resource Path: /ddd/Verifyffghhjj/ddddddd
2024-08-16T10:04:25.969-04:00 (434-abc-345789-de456ght) Successfully completed execution
2024-08-16T10:04:25.969-04:00 (434-abc-345789-de456ght) Method completed with status: 200
2024-08-16T10:04:25.969-04:00 (434-abc-345789-de456ght) AAAAAA Integration Endpoint RequestId: 11111111111111111111
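Once events shaped like the sample above are in Splunk, one way to extract the parenthesized transaction ID and compute the duration between the "Starting execution" and "Successfully completed" lines looks roughly like this (the index name is illustrative; adjust the rex to your final event format):

```spl
index=my_apigw_logs ("Starting execution" OR "Successfully completed")
| rex field=_raw "^\S+\s+\((?<transactionID>[^\)]+)\)"
| stats earliest(_time) as starttime, latest(_time) as endtime by transactionID
| eval duration=endtime-starttime
```

This avoids the transaction command entirely, which is usually cheaper when the IDs are unique.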
Which (if any) fields do you already have extracted? Are the transaction IDs unique, i.e. will there be only one "Starting ..." message and one "Successfully completed" message per transaction ID? Please can you share text versions of your events rather than pictures, as they are easier to deal with when simulating a solution.
Hello, I have a transaction which is coming in as multiple events. I can use the "| transaction" command to club them into one event.

1) I want the transaction ID extracted based on the below-highlighted (green) text.
2) Now, I want to get the transaction time based on the below-highlighted (yellow) text.

Below is the raw event log.

Thanks in advance!
And how did you determine that the events are not collected? The typical issue with events which seem not to be collected (when the job status does show returned events which should have been collected) is that something is wrong with the timestamps, so the events are collected and indexed but end up somewhere (or rather somewhen ;-)) other than where you expect them. Check your | tstats count on the summary index over All Time before and after you run the collecting search. This will tell you whether your index grows.
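A concrete form of that check (substitute your own summary index name for my_summary_index), run over All Time:

```spl
| tstats count where index=my_summary_index by _time span=1d
```

Besides showing whether the total grows after each run, the per-day breakdown also exposes events that landed on an unexpected date because of a timestamp problem.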
On what home page? The only part of my installation that seems to contain the string "return to Splunk" is the splunk_rapid_diag app.
That's one way to do it. Judging from your working code, you want to replace the single digit with 0<digit> in either of those two fields, not just when both parts are short (which was suggested by your initial sample). You can just do it with

| inputlookup dsa.csv
| rex mode=sed field=Description "s/\b\d\b/0&/g"
There is no such thing as "index listening". It's the forwarder's job to collect data, prepare it properly (most importantly, add proper metadata like source, sourcetype, host and destination index) and send it to the destination indexer or intermediate forwarder. So you don't have to change anything on the index side itself. The index is just a "bag" receiving events flowing from your forwarders.

You need to find where the data comes from and check the forwarder's configuration on that system. If this particular piece of configuration is being pushed from the deployment server in a pre-set state, that might be a bit more complicated.

But the question which can affect other stuff as well (like apps assigned to this server) is how the server syslog_01 was "migrated" to syslog_02, especially concerning the Splunk forwarder's config. If it was simply moved from one server to another, there is a possibility that the forwarder's name was set to a static value in the config and was retained after the configuration was moved, so your new forwarder will still report to your DS under the old name. Messy.
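For reference, the static name in question normally lives in the forwarder's server.conf; a sketch of what a stale entry might look like after such a move (the value shown is illustrative):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on the forwarder
[general]
# a retained value like this would make syslog_02 keep reporting as syslog_01
serverName = syslog_01
```

A default `host` setting in inputs.conf on the same box is worth checking for the same reason.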
@yuanliu  This also working fine. Thanks for your suggestion.
Hello Splunkers!! As per the below screenshot, you can see the jobs are running fine, but events are not being collected into the summary index. Please help me with some potential reasons and fixes.

Scheduled search with push data to summary index.
That's because you're collecting the contents of the event in a field called logEvent. If you want to collect this as a raw event, you obviously have to set the _raw field.

You are aware that using a sourcetype other than stash (or stash_hec for output_format=hec) uses up your license? You can also have issues with timestamps if you don't set _time properly before collecting (and generally you should set all default metadata fields).
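A sketch of the shape this usually takes before the collect (the index names and the event_time field are illustrative, not from the original search):

```spl
index=my_source_index
| eval _raw=logEvent
| eval _time=coalesce(event_time, _time)
| collect index=my_summary sourcetype=stash
```

Setting _raw makes the collected event searchable as a normal raw event, and keeping the stash sourcetype avoids the license hit mentioned above.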
You're getting close. One streamstats is not enough because you can't "pull" events you already passed while processing the stream.

Assuming you want to find when you have at least three consecutive ACC=1, you can do it like this

| eval Description=case(RML<104.008, "0", RML>108.425, "1", RML>=104.008, "OK", RML<=108.425, "OK")
| eval Warning=case(Description==0, "LevelBreach", Description==1, "LevelBreach")
| table LWL UWL RML
| eval CR=if(RML<UWL,"0",if(RML>LWL,"1","0"))
| streamstats window=3 sum(ACC) as running_count

This will mark the last of three consecutive ACC=1 with running_count=3. So we're on the right track; so far we've found where our streak ends. Now we have to do a little trick. Since we can't pull events "from behind", we need to

| reverse

so that we're looking at our events in the other order. Now we know that the event with running_count=3 will be starting our 3-event streak. So now we have to mark our 3 events looking forward

| streamstats current=t window=3 max(running_count) as mark_count

This will give a value of mark_count=3 for all events for which any of the last three events had running_count of 3 (which means that we're no further than 3 events from the _last_ event of our 3-event streak). Now all we have to do is find all those events we marked

| where mark_count=3

And now we can just tidy up after ourselves

| fields - running_count mark_count
| reverse

And there you have it. Unfortunately, since it uses the reverse command it can be quite memory-consuming (and might even have some limits I'm not aware of at this time).
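Put together as one run-anywhere sketch: the makeresults/eval lines below just fabricate ten events with an ACC streak at positions 4-6, and everything from the first streamstats onward is the double-streamstats-with-reverse technique described above.

```spl
| makeresults count=10
| streamstats count as n
| eval _time=_time - (10 - n)
| eval ACC=if(n>=4 AND n<=6, 1, 0)
| streamstats window=3 sum(ACC) as running_count
| reverse
| streamstats current=t window=3 max(running_count) as mark_count
| where mark_count=3
| fields - running_count mark_count
| reverse
```

Running this should leave exactly the three events of the streak (n=4, 5 and 6), since only they sit within three events of the point where running_count reaches 3.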