All Posts


Hi @tayshawn, this isn't a Splunk question: if your AWS ECS sends logs in JSON format, you should ask AWS whether it's possible to have logs in a different format, but that's probably very difficult! Anyway, if you use the Splunk Add-On for AWS, you should have the parsers to read these logs and extract all the fields, so you can put them in a table however you want, without changing the original source. Ciao. Giuseppe
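For illustration, a minimal search-time sketch of that idea (the index, sourcetype, and field names here are placeholders, not taken from your environment):
index=your_ecs_index sourcetype=your_ecs_sourcetype
| spath
| table _time, host, source, line, tag
spath only reshapes the JSON into fields at search time, so the raw events stay untouched.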
Hi @Yashvik, I found an error, even though it runs in my search; please try this again and check all the rows: index=_internal source=*license_usage.log* type="Usage" | eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h) | eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s) | eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx) | bin _time span=1d | stats sum(b) as b by _time, pool, s, st, h, idx | stats values(st) AS sourcetype sum(b) AS volumeB by _time idx | rename idx AS index | eval volumeB=round(volumeB/1024/1024/1024,2) | sort 20 -volumeB Ciao. Giuseppe
Hi @nithys, if this solution works, good for you: you solved your issue! See you next time! Let me know if I can help you more, and please accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated
Hi Team, we have 4 search heads in a cluster, and one of them is getting a KV store port issue asking us to change the port; the remaining 3 SHs are working fine. We are unable to restart Splunk on that particular SH. If I check the SH cluster status, only 3 servers are showing now. Splunk installed version: 9.0.4.1. Please find the error attached for visibility. Regards, Siva.
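For context, the KV store port on a search head is the one configured in server.conf; a minimal sketch of that stanza (8191 is just the shipped default, not the actual value on this SH):
# $SPLUNK_HOME/etc/system/local/server.conf on the affected search head
[kvstore]
port = 8191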
Hi @gcusello I tried the query below, which works: if I select goodsdevelopment in the 1st dropdown, I get options pertaining to airbag; if I select materialdomain in the 1st dropdown, I should get options pertaining to material,sm.
1. If I want the data entity dropdown to be multiselect, since a domain can have multiple data entities, how does the query below need to be modified? I am using 3 different input tokens for the inbound query and 3 different output tokens for the outbound query.
2. Also, how do I auto-clear the existing search result panel whenever a new domain is selected?
Query used:
<input type="dropdown" token="tokSystem" searchWhenChanged="true">
  <label>Domain Entity</label>
  <fieldForLabel>$tokEnvironment$</fieldForLabel>
  <fieldForValue>$tokEnvironment$</fieldForValue>
  <search>
    <query>| makeresults
| eval goodsdevelopment="a",materialdomain="b,c",costsummary="d"</query>
  </search>
  <change>
    <condition match="$label$==&quot;a&quot;">
      <set token="inputToken">test</set>
      <set token="outputToken">test1</set>
    </condition>
    <condition match="$label$==&quot;c&quot;">
      <set token="inputToken">dev</set>
      <set token="outputToken">dev1</set>
    </condition>
    <condition match="$label$==&quot;m&quot;">
      <set token="inputToken">qa</set>
      <set token="outputToken">qa1</set>
    </condition>
  </change>
------
<row>
  <panel>
    <html id="messagecount">
      <style>
        #user{
          text-align:center;
          color:#BFFF00;
        }
      </style>
      <h2 id="user">INBOUND</h2>
    </html>
  </panel>
</row>
<row>
  <panel>
    <table>
      <search>
        <query>index=$indexToken1$ source IN ("/*-*-*-$inputToken$") | timechart count by ObjectType ```| stats count by ObjectType```</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
------
<row>
  <panel>
    </style>
    <h2 id="user">outBOUND</h2>
  </html>
  <chart>
    <search>
      <query>index=$indexToken$ source IN ("/*e-f-$outputToken$-*-","*g-$outputToken$-h","i-$outputToken$-j")</query>
      <earliest>$time_picker.earliest$</earliest>
      <latest>$time_picker.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
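For reference, a sketch of the two mechanisms being asked about - a multiselect input that builds an OR'ed search fragment, and a results panel that is hidden (effectively cleared) until its token is set again. The token, choice, and lookup names below are placeholders, not the actual dashboard values:
<input type="multiselect" token="entityToken" searchWhenChanged="true">
  <label>Data Entity</label>
  <!-- each selected value becomes source="*<value>*", joined with OR -->
  <valuePrefix>source="*</valuePrefix>
  <valueSuffix>*"</valueSuffix>
  <delimiter> OR </delimiter>
  <choice value="airbag">airbag</choice>
  <choice value="material">material</choice>
  <choice value="sm">sm</choice>
</input>
<panel depends="$inputToken$">
  <!-- this panel is only rendered while $inputToken$ has a value; adding
       <unset token="inputToken"></unset> inside the domain dropdown's <change>
       conditions clears it until the new selection sets the token again -->
  <table>
    <search>
      <query>index=$indexToken1$ $entityToken$ | stats count by ObjectType</query>
    </search>
  </table>
</panel>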
Hello everyone! We have a container service running on AWS ECS with the Splunk log driver enabled (via HEC token). At the moment, the log lines look awful (see the example below), and no event level is extracted.
{
   line: xxxxxxxxx - - [16/Sep/2023:23:59:59 +0000] "GET /health HTTP/1.1" 200 236 "-" "ELB-HealthChecker/2.0" "-"
   source: stdout
   tag: xxxxxxxxxxx
}
host = xxx source = xxx sourcetype = xxxx
We would like to make changes in Splunk so that the events follow a better-formatted standard, like this:
Sep 19 03:27:09 ip-xxx.xxxx xx[16151]: xxx ERROR xx - DIST:xx.xx BAS:8 NID:w-xxxxxx RID:b FID:bxxxx WSID:xxxx
host = xxx level = ERROR source = xxx sourcetype = xxx
We do have a log forwarder rule configured (logs for other services are all formatted like the above). May I get some help reformatting these logs? Much appreciated!
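One commonly used way to do this kind of reshaping is an index-time transform that rewrites _raw down to just the container's log line. A rough sketch only: the sourcetype name and regex are assumptions that must match the real JSON envelope, and the settings belong on the indexers (or the heavy forwarder) handling the HEC input:
# props.conf
[aws:ecs:container]
TRANSFORMS-keep_line_only = ecs_keep_line_only

# transforms.conf
[ecs_keep_line_only]
SOURCE_KEY = _raw
REGEX = "line"\s*:\s*"(.+?)"\s*,\s*"source"
DEST_KEY = _raw
FORMAT = $1
Fields such as level would then still need search-time extractions on the reshaped events.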
Extract the error number from the message and use that instead of message, e.g.
index=etc message="error 1" OR message="error 2" OR message="error N"
| rex field=message "error (?<error>\d+)"
| chart count by instance_name, error
You will have to change the regex in the rex statement so it extracts what you want - the one above just extracts the number after the word "error ". Note that if you want the message to be one of A OR B OR C, you use message=A OR message=B OR message=C rather than message=A OR B OR C. You can also use message IN ("A","B","C")
Hello! I want to count how many different kinds of errors appeared for different services. At the moment, I'm searching for the errors like this:
Index=etc message = "error 1" OR "error 2" OR ... "error N" | chart count by instance_name, message
And I've got as a result:
instance_name | "error 1 for us1" | "error 1 for us2" | ... | "error 1 for usN" | Other
And under those column names, it shows how many times each error appeared. How can I count them without caring about the user and only caring about the "error 1" string? I mean, I want the result to look like:
Instance_name | error 1 | error 2 | ... | error N
Hi @gcusello I used the same search which you shared above and didn't make any changes. I will share the screenshot shortly, as I am getting some errors uploading the picture.
I have a CSV of URLs I need to search against my proxy index (the url field); I want to be able to do a count or match of the URLs. My csv looks like this (with the header of the column called kurl):
kurl
splunk.com
youtube.com
google.com
So far I have this SPL, but it's only counting the matches; I need the URLs that don't exist to count as 0:
index="web_index" [| inputlookup URLs.csv | fields kurl | rename kurl as url] | stats count by url
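One way to bring the missing URLs in with a count of 0 is to append the lookup back in and take the maximum count per URL; a sketch, assuming the url values in the proxy events match the kurl entries exactly:
index="web_index" [| inputlookup URLs.csv | fields kurl | rename kurl as url]
| stats count by url
| append [| inputlookup URLs.csv | rename kurl as url | eval count=0]
| stats max(count) as count by url
URLs with real matches keep their count, and URLs seen only in the appended lookup rows end up with 0.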
I am trying to create roles via the API, and here is the curl request. The question I have is that I am not able to add more than one index to the srchIndexesAllowed field, either when I create the role or when I update it. I am not able to find any Splunk documentation around the request body. Does anyone know how I can add/update multiple indexes for a role?
curl --location 'https://XXXXXXXXXXXXXXX/services/authorization/roles/fi_a00002-namespace_nonprod_power' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --header 'Authorization: Basic XXXXXXXXXXXXXXXX' \
  --data-urlencode 'imported_roles=user' \
  --data-urlencode 'srchIndexesAllowed=index1,index2' \
  --data-urlencode 'srchIndexesDefault=index1,index2'
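Multi-valued arguments to the Splunk REST API are generally passed by repeating the parameter once per value rather than comma-separating them; a sketch of what that would look like here (an assumption to verify against this endpoint, not a confirmed fix):
curl --location 'https://XXXXXXXXXXXXXXX/services/authorization/roles/fi_a00002-namespace_nonprod_power' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --header 'Authorization: Basic XXXXXXXXXXXXXXXX' \
  --data-urlencode 'imported_roles=user' \
  --data-urlencode 'srchIndexesAllowed=index1' \
  --data-urlencode 'srchIndexesAllowed=index2' \
  --data-urlencode 'srchIndexesDefault=index1' \
  --data-urlencode 'srchIndexesDefault=index2'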
There are a couple of ways you can do this, one with simple token usage and one with JavaScript. For the JS, see the 'Table row expansion' example in the Splunk dashboard examples app https://splunkbase.splunk.com/app/1603 - there are some simple examples there. You can also do something like this with tokens. This example dashboard shows how you can use a token to control the form C1 takes. See the $tok_row$ usage.
<form version="1.1">
  <label>test</label>
  <init>
    <set token="tok_row">0</set>
  </init>
  <search id="base_data">
    <query>| makeresults count=5
| fields - _time
| streamstats c as row
``` lets say there is one table with 4 columns - C1, C2, C3, C4 and 5 rows - R1, R2, R3, R4, R5. Consider Column C2 has 1 value in R1, 10 values in R2, 4 values in R3, 5 values in R4, 2 values in R5.```
| eval C1=case(row=1, "Value1", row=2, split("Value1,Value2,Value3,Value4,Value5,Value6,Value7,Value8,Value9,Value10", ","), row=3, split("Value1,Value2,Value3,Value4", ","), row=4, split("Value1,Value2,Value3,Value4,Value5", ","), row=5, split("Value1,Value2", ","))
| eval C1=mvmap(C1, C1."_R".row)
| foreach 2 3 4 [ eval C&lt;&lt;FIELD&gt;&gt;=random() % 10000 ]
| eval C1_FULL=C1</query>
  </search>
  <row>
    <panel>
      <table>
        <search base="base_data">
          <query>| eval C1=if(row=$tok_row$, C1_FULL, mvindex(C1_FULL, 0, 0))</query>
        </search>
        <fields>"C1","C2","C3","C4"</fields>
        <drilldown>
          <eval token="tok_row">if($row.row$=$tok_row$, 0, $row.row$)</eval>
        </drilldown>
      </table>
    </panel>
  </row>
</form>
Hope this gives you some ideas
You don't have any field constraint or prefix/suffix values in your ZoneId_tok token, so this search query
index=5_ip_cnv sourcetype=ftae_hmi_alarms $Zoneid_tok$ | eval Time=_time | transaction Alarm startswith=*$Zoneid_tok$",1,0,192" endswith=*$Zoneid_tok$",0,0,192" maxevents=2
will translate into
index=5_ip_cnv sourcetype=ftae_hmi_alarms ZONE_A OR ZONE_B OR ZONE_C... | eval Time=_time | transaction Alarm startswith=*ZONE_A OR ZONE_B OR ZONE_C...",1,0,192" endswith=*ZONE_A OR ZONE_B OR ZONE_C...",0,0,192" maxevents=2
The first search line may be fine with your data if you are just looking for those words in the raw data, but I expect that you do not have events containing the startswith and endswith strings with the expanded token string. Without seeing an example of your data, I suspect you do not need to specify the zone data in the startswith and endswith strings at all.
On a separate note regarding transaction: it can silently give you wrong results if your data set is large, as it has to hold onto partial transactions until it finds an end event, so if you have long durations you can end up with results that are wrong. It is generally possible to replace transaction with stats to achieve the same thing, but doing so requires some knowledge of your data.
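As an illustration only, a stats-based version might look something like the sketch below; the start/end markers are guessed from the startswith/endswith strings above and would have to be adjusted to the real events:
index=5_ip_cnv sourcetype=ftae_hmi_alarms $Zoneid_tok$
| eval state=if(match(_raw, ",1,0,192"), "start", if(match(_raw, ",0,0,192"), "end", null()))
| stats min(_time) as start_time, max(_time) as end_time, values(state) as states by Alarm
| where mvcount(states)=2
| eval duration=end_time-start_time
Unlike transaction, this keeps only one aggregated row per Alarm and does not silently drop partial pairs when memory limits are hit.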
Correct - if you are getting no results, all the hosts are reporting in the time period of your search.
Since the 2020s releases, the dropdown lookup has moved to the ITSI content pack app. There is a content pack, DA-ITSI-CP-unix-dashboards, that contains an automatic lookup for some sourcetypes. The lookup is Lookup-dropdowns, relying on a csv lookup named dropdowns.csv and coupled with an automatic lookup named "dropdownsLookup". The issue was that the csv lookup is not shipped with the app, but is created/updated by a scheduled search, "dropdowns_lookup_migrate". That scheduled search was running, but failed to create the lookup because of field name issues. As a consequence, every time a search triggered the automatic lookup, the lookup threw an error because it could not find the csv file to load. To address the issue: - we ran the search "dropdowns_lookup_migrate" manually (without the append=t) to create the dropdowns lookup once, so that the base lookup exists with the correct fields. If you have no entities, the lookup has only one line with "all_hosts". - then we waited for the search bundle to replicate to the indexers. After that, the lookup error stopped.
Oh, thanks! It is working in most cases. I found that there are cases where the installation event (new version) is generated before the removal event (old version). There are not many such cases, about 50 hits per week, but maybe it is possible to handle this case in the query? Thank you again so much for your help.
Just wanted to add this one for future readers. Another important advantage of HEC over TCP is error handling. Specifically, if you send data to a TCP endpoint, there is no interaction: no response from the TCP endpoint to let you know data has been received and processed. If there are load issues on the server or the queues are filled up, there is a chance that data will get lost. Data may get dropped and the sending process will not have any idea there was an issue. With HEC, you get an HTTP response, such as a 400 or 500 error, indicating problems. While most of the possible errors are specific to HEC, at least 2 are an advantage over TCP (Server is busy and Internal Server Error): https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/TroubleshootHTTPEventCollector#Possible_error_codes Receiving these codes, a sender knows there is a problem and can attempt to resend the data later. You can also configure "useACK" (indexer acknowledgment), which allows the sender to check and confirm that data has been received and indexed before purging those events from its own system.
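To illustrate, a minimal HEC request and the kind of response that gives the sender that feedback (hostname and token are placeholders):
curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from HEC", "sourcetype": "demo"}'
# on success the endpoint answers with a JSON body such as:
#   {"text":"Success","code":0}
# under load it can instead return an HTTP 503 "Server is busy" error body,
# which the sender can use as a signal to retry later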
You're using the join command, which spawns a subsearch. Subsearches have a limit on runtime as well as on returned results, and you're hitting that limit. Try reworking your search so that you don't need join. It's often better to group your data with the stats command, especially since both searches you're trying to join are over the same index. As a side note, with a raw search I don't think there will be a noticeable difference between TERM(Application) and just searching for the string Application - there would be a huge difference, though, if you reworked your stats search into a tstats-based search.
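As a rough sketch of that stats-based approach (assuming ApplicationId and MessageCode are extracted fields, as in the original searches), something like this avoids the subsearch limits entirely:
index=demo-app ((TERM(Application) TERM(Received) NOT TERM(processed)) OR (TERM(App) TERM(transaction)))
| eval matchfield=coalesce(ApplicationId, MessageCode)
| stats values(ApplicationId) as ApplicationId, values(MessageCode) as MessageCode by matchfield
| where isnotnull(ApplicationId) AND isnotnull(MessageCode)
| stats count
This counts the matchfield values that appear both as an ApplicationId in the first event set and as a MessageCode in the second, which is what the join was producing.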
I have the below query:
index=demo-app TERM(Application) TERM(Received) NOT TERM(processed) | stats count by ApplicationId | fields ApplicationId | eval matchfield=ApplicationId | join matchfield [search index=demo-app TERM(App) TERM(transaction) | stats count by MessageCode | fields MessageCode | eval matchfield=MessageCode] | stats count(matchfield)
When I run this search query, the statistics values are limited to 50,000. How do I tweak my query to see the complete results without this restriction?
There is no such thing as a "generic PII data scan". First, you need to define what you want to find, then define how this data can be expressed, and then you search for it. And you'll always get false positives and false negatives; that's just how it is with automated searching for such loosely defined stuff. The more precisely defined the format, the better (like IBAN numbers).
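For example, even something as well-defined as an IBAN is usually matched with a deliberately loose pattern first and validated afterwards; a sketch (the index name is a placeholder, and the regex ignores per-country lengths and check digits, so it will still produce false positives):
index=your_index
| rex field=_raw max_match=10 "(?<iban>\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b)"
| where isnotnull(iban)
| stats count by iban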