All Topics

Hello there, I recently created a new trial instance of Splunk Cloud. When I try to log in to the application, it does not allow me to; I get a "login failed" error.
Hi all, I am using this plugin to extract the info from the User-Agent header: uas_lookup. The SPL looks like this: mysearch .. | rename User-Agent as http_user_agent | lookup uas_lookup http_user_agent. The error I am getting is: Script execution failed for external search command '/opt/splunk/etc/apps/TA-uas_parser/bin/uas_lookup.py'. Does anybody know how to fix this? Thanks
Hi, I'm getting the error below, but when I try to curl with the Jira credentials on my search head, it gives the output. ConnectionError at "/opt/splunk/etc/apps/TA-jira-service-desk-simple-addon/bin/ta_jira_service_desk_simple_addon/aob_py2/requests/adapters.py", line 516: HTTPSConnectionPool(host='abc.com', port=123): Max retries exceeded with url: /rest/api/latest/issuetype (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fbd66deac10>: Failed to establish a new connection: [Errno 110] Connection timed out',)). Any suggestions?
I am stuck on this page when I try to configure website monitoring. After I click Save Configuration it does not move further. Please provide some input.
I have two fields in two different log lines and want a result like the sample table below:

product_code_pause count product_code_unpause count
1234567 3 1234567 2

How can I achieve this by updating the query below?

("Pause entry") OR ("Paused Entry added back to cart successfully : ")
| rex field=_raw "product : (?<product_code_pause>(?:[^,]+))"
| rex field=_raw "successfully : (?<product_code_unpause>(?:[^,]+))"
| stats count by product_code_pause,product_code_unpause
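One hedged way to get both counts into a single table, assuming the two rex extractions above and that each event matches exactly one of the two patterns, is to merge the extracted codes into one field and chart by action:

```spl
("Pause entry") OR ("Paused Entry added back to cart successfully : ")
| rex field=_raw "product : (?<product_code_pause>[^,]+)"
| rex field=_raw "successfully : (?<product_code_unpause>[^,]+)"
| eval product_code=coalesce(product_code_pause, product_code_unpause)
| eval action=if(isnotnull(product_code_unpause), "unpause", "pause")
| chart count over product_code by action
```

This yields one row per product code with the pause and unpause counts side by side; the original two-column layout could then be rebuilt with eval/table if needed.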
Hi, how can I index multiple files with only one INGEST_EVAL? For instance, I have a filename that can change:

prod-1-%d%m%Y%H%M%S.txt
prod-2-%d%m%Y%H%M%S.txt
prod-3-%d%m%Y%H%M%S.txt

I tried this:

[timestampeval]
INGEST_EVAL = _time=strptime(replace(source,".*(?=/)/",""),"prod-.-%d%m%Y%H%M%S.txt")

But it doesn't work...
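A likely cause: strptime has no wildcard character, so the literal `.` in "prod-.-" never matches the digit in "prod-1-", "prod-2-", etc. One sketch, assuming the filenames above, is to strip the whole variable prefix with replace before parsing, so the format string only has to match the timestamp and extension:

```spl
[timestampeval]
INGEST_EVAL = _time=strptime(replace(source, ".*prod-\d+-", ""), "%d%m%Y%H%M%S.txt")
```

The literal ".txt" in the format string consumes the extension; verify the regex against your actual source paths before deploying.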
Hi All, I have a requirement wherein I count a specific log in the last minute; the count is supposed to be 1. I need to convert this count to a boolean to show in my visualization, something like: if count = 1 then True else False. I need only true or false as the output of the query, not the count. I'm basically trying to create application status monitoring! Any pointers? Regards, Sharad R K
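A minimal sketch, assuming the events can be matched with a simple search over the last whole minute (index, sourcetype, and the search string are placeholders):

```spl
index=my_index sourcetype=my_sourcetype "specific log message" earliest=-1m@m latest=@m
| stats count
| eval status=if(count=1, "True", "False")
| fields status
```

Since stats count always returns a row (count=0 when nothing matches), the search emits "False" even when no events arrive, which is what a status panel needs.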
For example, I have multiple indexes like index1, index2, index3. Among these, a field named "Category" is present in index2. I want to write a query, without using OR (index=index1 OR index=index2 OR index=index3), that searches for my field "Category" first in index1, then moves on to index2, finds the result, and does not go on to index3. The reason is that I don't want to load another index after I have found my result in a previous index; if I go with OR, it loads all three indexes in the result. Someone please help me here.
Hi, I am facing a problem while passing the latest value to the drilldown form. When I click on each row, I want the drilldown to show only the transactions in that time range, dynamically.

<dashboard>
  <label>Sample</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=_internal | timechart count by sourcetype</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">all</option>
        <drilldown>
          <set token="name2">$click.name2$</set>
          <eval token="ear">$click.value$</eval>
          <eval token="lat">relative_time($click.value$,"+59m")</eval>
        </drilldown>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>$name2$</title>
        <search>
          <query>index=_internal sourcetype=$name2$ | timechart span=1m count</query>
          <earliest>$ear$</earliest>
          <latest>$lat$</latest>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
</dashboard>

The dashboard above works as expected, but if I select more than 24 hours, I get a problem. Expected result: if I select <= 24 hours, the latest time value should be <eval token="lat">relative_time($click.value$,"+59m")</eval>; likewise, for 7 days it should be "+1d", for 30 days "+7d", and for year-to-date "+1mon". Please help me with this. Thanks in advance.
I need to ingest data from a DB into Splunk via DB Connect. I need to choose a column for the rising column, which has a date and time field in it with the format below:

starttime=2020-06-16 01:26:35.595026665

How shall I convert the above time format at the query level so that it can be used as the checkpoint primary key for the rising column?
Hello, I have this command: | metadata type=sourcetypes index=wineventlog. The problem is that multiple lines are returned for the "WinEventLog" sourcetype, and I don't understand why, since the names are absolutely the same. I expect to get one line per sourcetype. If I search with index=wineventlog and stats by sourcetype, there is no problem, so it is something with the metadata command. Same issue for the "wineventlog" sourcetype.
Sample query:

index=* app_name="batch" OR app_name=sbond* ("All feed is completed" OR "Test Success: Test" OR "Test1 Success: Test1" OR "Finished handshake success")
| bucket span=1d _time
| eval dayweek=strftime(_time,"%A")
| convert timeformat="%m-%d-%y" ctime(_time) as c_time
| eval Job = case(like(_raw, "%All feed is completed%"), "first Job", like(_raw, "%Test Success: Test%"), "second Job", like(_raw, "%Test1 Success: Test1%"), "third job", like(_raw, "%Finished handshake success%"), "Genius job", 1==1, "Incorrect searchString match, please refactor")
| stats count by Job c_time dayweek
| eval status=case((Job="Genius job") AND (dayweek="Saturday" OR dayweek="Sunday"), "NA", count>0, "Success", count<0, "Failure")
| xyseries Job c_time status

Actual result: for 30 days it shows sideways.

jobname date1 date2 date3 date4 date5 date6 date7 date8 date9
xxx

Expected result: split the result into 7-day chunks.

jobname date1 date2 date3 date4 date5 date6 date7
xxx

jobname date8 date9 date10 date11 date12 date13 date14
xxx

Thanks in advance
I have a repeating JSON payload appearing in my logs, and I am interested in capturing the last payload. Right now I am seeing 3 events with the search query below, but I want only the last event.

Here is my search query:

index=abc_applications cf_space_name=production cf_app_name="my-app-name" "\"newAction\":\"request-change\"" AND "Final obj-1----------"
| rex field=_raw "Final obj-1----------(?P<json_data_1>\{.*\})"
| eval json_data = mvindex(json_data_1, -1)
| spath input=json_data
| rename data.cRID as CRID
| eval Attachment_Count = spath(json_data, "changeAttachment{}")
| eval Approver_Count = spath(json_data, "changeApprover{}")
| eval Config_Count = spath(json_data, "changeConfigItem{}")
| stats count(Attachment_Count) as Attachment_Count, count(Approver_Count) as Approver_Count, count(Config_Count) as Config_Item_Count by CRID

This is how my logs appear. You will not see the text (====start====) (====end====) in the logs; I added those markers just to separate the repeating payloads, which are otherwise identical in pattern:

================start=============
Final obj-1---------- { "action":"Waiting Approval", "changeConfigItem":[ {} ], "changeApprover":[ {} ], "changeAttachment":[ {}, {} ] "newAction":"request-change" }
================end=============

==========start==================
Final obj-1---------- { "action":"Waiting Approval", "changeConfigItem":[ {} ], "changeApprover":[ {} ], "changeAttachment":[ {}, {} ], "data":{ "cRID":"1111"} "newAction":"request-change" }
==========end==================

==========start==================
Final obj-1---------- { "action":"Waiting Approval", "changeConfigItem":[ {} ], "changeApprover":[ {} ], "changeAttachment":[ {}, {} ] "newAction":"request-change" }, "data":{ "cRID":"1111"}
==========end==================
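Since Splunk returns events newest first, one hedged way to keep only the most recent matching payload (field and index names taken from the search above) is to cut the result set to a single event before parsing:

```spl
index=abc_applications cf_space_name=production cf_app_name="my-app-name"
    "\"newAction\":\"request-change\"" "Final obj-1----------"
| head 1
| rex field=_raw "Final obj-1----------(?P<json_data>\{.*\})"
| spath input=json_data
| rename data.cRID as CRID
```

Use tail 1 instead of head 1 if "last" means the oldest of the three events rather than the most recent one.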
Hi Splunk Team! I recently received messages like the following how do i fix it Thanks!
My Network_Traffic data model was working just fine this morning. I stopped the acceleration so that I could add more fields to the All_Traffic data set. It seems that after I did that, it no longer captures any events. I even tried replacing the original constraint of "(`cim_Network_Traffic_indexes`) tag=network tag=communicate" with "index=*" and I still don't get any events during the preview. I tried rebuilding the summaries and that didn't seem to fix the issue. I've also restarted the Splunk Enterprise instance and the server itself with no luck. Lastly, I cloned the data model just for fun but  I still get the same behavior. Has anyone experienced this? If so, were you able to resolve the issue? 
Hi All, hoping someone can point me in the right direction with this one. The use case: there are some processes for which I need to check whether data is being written to their logs (that is the easy part), and I also need to flag a lack of data by host. I used a lookup file that I add to the search in the scenarios where there is a potential issue and I need to indicate that a host has no data. I managed to get it working, and I've combined a number of different processes and use cases into one search; however, I've used the append command. Unfortunately the Splunk admins in my company do not allow appends in any search (it's a big no-no) regardless of the data size, which in this case isn't large.

This is what the search looks like currently:

index=test_index sourcetype=process_a_log "Success Message" earliest=-2h
| inputlookup append=t hosts.csv
| fields host
| stats latest(_indextime) as indexedTime by host
| eval count=if(isnull(indexedTime),null,(now()-indexedTime))
| eval process="ProcessA"
| table host,process,count
| append
    [ search index=test_index sourcetype=process_b_log "Generation completed" earliest=-1h
      | inputlookup append=t hosts.csv
      | fields host
      | stats latest(_indextime) as indexedTime by host
      | eval count=if(isnull(indexedTime),null,(now()-indexedTime))
      | eval process="ProcessB"
      | table host,process,count ]
| append
    [ search index=test_index earliest=-5m
      | inputlookup append=t hosts.csv
      | fields host
      | stats latest(_indextime) as indexedTime by host
      | eval count=if(isnull(indexedTime),null,(now()-indexedTime))
      | eval process="Data"
      | table host,process,count ]

I've omitted the parts at the bottom where I evaluate thresholds and output severity.
I attempted to do something like this:

index=test_index (sourcetype=process_a_log "Success Message" earliest=-2h) OR (sourcetype=process_b_log "Generation completed" earliest=-1h) OR (sourcetype=* earliest=-5m)
| inputlookup append=t hosts.csv
| fields host
| stats latest(_indextime) as indexedTime by host
| eval count=if(isnull(indexedTime),null,(now()-indexedTime))
| eval process=case(
    match(_raw,"Success Message"),"ProcessA",
    match(_raw,"Generation completed"),"ProcessB",
    1=1,"Other")
| table host,process,count

However, it doesn't produce the outcome I require, given that the events for all the processes are lumped together, and while it appends the hosts, I need it to append per process. Basically I need something along the lines of 'inputlookup append=t by process', but I'm unsure how to achieve it. Any help would be greatly appreciated. Thanks.
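One append-free sketch (keeping only the single inline inputlookup append=t that the original search already uses): classify each event into a process, drop events outside that process's window, then fan the lookup hosts out across all three process names so hosts with no data still get a row per process. Index, sourcetypes, and process names are taken from the searches above; this is a sketch under those assumptions, not a drop-in replacement.

```spl
index=test_index earliest=-2h
    ((sourcetype=process_a_log "Success Message")
     OR (sourcetype=process_b_log "Generation completed")
     OR sourcetype=*)
| eval process=case(match(_raw,"Success Message"), "ProcessA",
                    match(_raw,"Generation completed"), "ProcessB",
                    1=1, "Data")
| where (process="ProcessA")
     OR (process="ProcessB" AND _time>=relative_time(now(),"-1h"))
     OR (process="Data"     AND _time>=relative_time(now(),"-5m"))
| stats latest(_indextime) as indexedTime by host, process
| inputlookup append=t hosts.csv
| eval process=if(isnull(process), "ProcessA,ProcessB,Data", process)
| makemv delim="," process
| mvexpand process
| stats latest(indexedTime) as indexedTime by host, process
| eval count=if(isnull(indexedTime), null(), now()-indexedTime)
| table host, process, count
```

The final stats collapses the lookup-generated rows into the real ones, leaving indexedTime null (and hence count null) only for host/process pairs with no data.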
After successfully blacklisting DGA domains, would it be possible to track those domains to an IP address, which could later give us geo-coordinates and make it possible to map them on a dashboard? I know there are some Python modules that can identify the IP address of legitimate domains; it would be great if you could look into a method/resource that would potentially do the same for DGA domains. That would add a lot more value to the app. Respectfully, Hakob
Hi Team, I would like to use join to search for "id", pass it to a subsearch, and get the consolidated result with time.

Search 1, which looks for the value next to "id", gives me a list:

index=TEST sourcetype=source1 url="/api/v1/test"
| rex "'id':'(?<id>[\d.]+)"
| table _time id

The search above gives me the integer "id" that I will pass to search 2.

Search 2:

index=TEST sourcetype=source2 url="/api/*/values(id)/*" Response_Status="200"
| table url _time

I need the output from search 2, referencing the id from search 1.
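A hedged sketch of the join, assuming the id appears inside the url of the source2 events and can be extracted with rex (the values(...) segment pattern is a guess based on the wildcard url above):

```spl
index=TEST sourcetype=source1 url="/api/v1/test"
| rex "'id':'(?<id>[\d.]+)"
| table _time id
| join type=inner id
    [ search index=TEST sourcetype=source2 Response_Status="200"
      | rex field=url "/values\((?<id>[\d.]+)\)"
      | table _time url id ]
| table _time id url
```

join matches on the shared id field name; note that subsearches are capped in result count and runtime, so for large volumes a stats-by-id merge of both sourcetypes is usually preferred over join.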
In Tenable.sc we have the option of grouping assets into lists and giving them a specific name. When using the Tenable Add-on for Splunk, I could not find a field in either the asset or the vulnerability data indicating which asset lists a particular system might be associated with. Is there a way to import the asset list information into Splunk otherwise? Or is the information already included somewhere and I just can't find it?
Hello Team, I have the search below, but I want to compare today's data with yesterday's data, and likewise this week's data with last week's data. How do I do that?

source=/var/log/cassandra/system.log index=cassdb_dev host="hostname" GCInspector.java NOT tombstone_warn_threshold
| rex "GC in (?<GCtime>\d+)"
| timechart span=1s values(GCtime) as GCtime by host

I really appreciate your quick help!

Thanks,
Chandra
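One hedged option is timewrap, which overlays consecutive periods of the same timechart on one axis. The sketch below reuses the extraction above and compares today against yesterday over a 2-day search window (swap 1d for 1w, over a 2-week window, for the week-over-week view); a coarser span than 1s and a single series are assumed, since timewrap wraps one series per period:

```spl
source=/var/log/cassandra/system.log index=cassdb_dev host="hostname" GCInspector.java NOT tombstone_warn_threshold
| rex "GC in (?<GCtime>\d+)"
| timechart span=1m avg(GCtime) as GCtime
| timewrap 1d
```

The by-host split from the original is replaced with a single avg series for one host; run one such panel per host if the per-host comparison is needed.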