All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Extract the error number from the message and use that instead of message, e.g.

index=etc message="error 1" OR message="error 2" OR message="error N"
| rex field=message "error (?<error>\d+)"
| chart count by instance_name, error

You will have to change the regex in the rex statement so you extract what you want - the one above just extracts the number after the word "error ".

Note that if you want the message to be one of A OR B OR C, you use message=A OR message=B OR message=C rather than message=A OR B OR C. You can also use message IN ("A","B","C").
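The normalisation the rex statement performs can be mimicked outside Splunk. This Python sketch (with made-up messages) shows the same idea: reduce each message to its error number first, then count, so per-user variations collapse into one column:

```python
import re
from collections import Counter

# Hypothetical sample messages; the rex in the answer above captures the
# number after the word "error " into a field called "error".
messages = [
    "error 1 for us1",
    "error 1 for us2",
    "error 2 for us1",
]

counts = Counter()
for msg in messages:
    m = re.search(r"error (\d+)", msg)  # same pattern as the rex statement
    if m:
        counts[f"error {m.group(1)}"] += 1

print(counts)  # error 1 -> 2, error 2 -> 1
```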
Hello! I want to count how many different kinds of errors appeared for different services. At the moment, I'm searching for the errors like this:

index=etc message="error 1" OR "error 2" OR ... "error N"
| chart count by instance_name, message

And I've got as a result:

instance_name | "error 1 for us1" | "error 1 for us2" | ... | "error 1 for usN" | Other

Under those column names, it shows how many times each error appeared. How can I count them without caring about the user, only caring about the "error 1" string? I mean, I want the result to look like:

instance_name | error 1 | error 2 | ... | error N
Hi @gcusello, I used the same search you shared above and didn't make any changes. I will share the screenshot shortly, as I am getting some errors when uploading the picture.
I have a CSV of URLs I need to search against my proxy index (the url field); I want to be able to do a count or match of the URLs. My CSV looks like this (with the header of the column called kurl):

kurl
splunk.com
youtube.com
google.com

So far, I have this SPL, but it only counts the matches; I need the URLs that don't exist to count as 0:

index="web_index" [| inputlookup URLs.csv | fields kurl | rename kurl as url]
| stats count by url
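The missing piece is a zero-fill: every URL from the lookup must start the count at 0 so absent URLs still appear (one common SPL approach is to append the lookup rows with count=0 and sum). Here is a rough Python sketch of that idea, with invented data:

```python
from collections import Counter

# kurl values from the hypothetical URLs.csv lookup
lookup_urls = ["splunk.com", "youtube.com", "google.com"]

# url field values seen in the web_index events (illustrative data)
event_urls = ["splunk.com", "splunk.com", "google.com"]

# Start every lookup URL at 0 so URLs with no events still appear
counts = {u: 0 for u in lookup_urls}
counts.update(Counter(u for u in event_urls if u in counts))

print(counts)  # youtube.com stays at 0
```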
I am trying to create roles via the API, and here is the curl request. The question I have is: I am not able to add more than one index to the srchIndexesAllowed field, either when I create the role or when I update it. I am not able to find any Splunk documentation about the request body. Does anyone know how I can add/update multiple indexes for a role?

curl --location 'https://XXXXXXXXXXXXXXX/services/authorization/roles/fi_a00002-namespace_nonprod_power' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Authorization: Basic XXXXXXXXXXXXXXXX' \
--data-urlencode 'imported_roles=user' \
--data-urlencode 'srchIndexesAllowed=index1,index2' \
--data-urlencode 'srchIndexesDefault=index1,index2'
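For what it's worth, Splunk's REST API generally accepts multivalued fields as the same form key repeated once per value, rather than a single comma-separated string (in curl that would be one --data-urlencode 'srchIndexesAllowed=index1' plus a second one for index2). This is an assumption worth verifying against your instance; the Python sketch below only demonstrates the repeated-key form encoding, without sending anything:

```python
from urllib.parse import urlencode

# Repeated keys express a multivalued field in x-www-form-urlencoded data.
# The field names mirror the curl request above; values are placeholders.
form = [
    ("imported_roles", "user"),
    ("srchIndexesAllowed", "index1"),
    ("srchIndexesAllowed", "index2"),
    ("srchIndexesDefault", "index1"),
    ("srchIndexesDefault", "index2"),
]

body = urlencode(form)
print(body)  # srchIndexesAllowed appears once per index
```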
There are a couple of ways you can do this: one with simple token usage and one with JavaScript. For the JS approach, see the 'Table row expansion' example in the Splunk dashboard examples app (https://splunkbase.splunk.com/app/1603); there are some simple examples there.

You can also do it something like this with tokens. This example dashboard shows how you can use a token to control what form C1 takes. See the $tok_row$ usage.

<form version="1.1">
  <label>test</label>
  <init>
    <set token="tok_row">0</set>
  </init>
  <search id="base_data">
    <query>
| makeresults count=5
| fields - _time
| streamstats c as row
``` lets say there is one table with 4 columns - C1, C2, C3, C4 and 5 rows - R1, R2, R3, R4, R5. Consider Column C2 has 1 value in R1, 10 values in R2, 4 values in R3, 5 values in R4, 2 values in R5.```
| eval C1=case(row=1, "Value1", row=2, split("Value1,Value2,Value3,Value4,Value5,Value6,Value7,Value8,Value9,Value10", ","), row=3, split("Value1,Value2,Value3,Value4", ","), row=4, split("Value1,Value2,Value3,Value4,Value5", ","), row=5, split("Value1,Value2", ","))
| eval C1=mvmap(C1, C1."_R".row)
| foreach 2 3 4 [ eval C&lt;&lt;FIELD&gt;&gt;=random() % 10000 ]
| eval C1_FULL=C1
    </query>
  </search>
  <row>
    <panel>
      <table>
        <search base="base_data">
          <query>
| eval C1=if(row=$tok_row$, C1_FULL, mvindex(C1_FULL, 0, 0))
          </query>
        </search>
        <fields>"C1","C2","C3","C4"</fields>
        <drilldown>
          <eval token="tok_row">if($row.row$=$tok_row$, 0, $row.row$)</eval>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

Hope this gives you some ideas.
You don't have any field constraint or prefix/suffix values in your ZoneId_tok token, so this search query

index=5_ip_cnv sourcetype=ftae_hmi_alarms $Zoneid_tok$
| eval Time=_time
| transaction Alarm startswith=*$Zoneid_tok$",1,0,192" endswith=*$Zoneid_tok$",0,0,192" maxevents=2

will translate into

index=5_ip_cnv sourcetype=ftae_hmi_alarms ZONE_A OR ZONE_B OR ZONE_C...
| eval Time=_time
| transaction Alarm startswith=*ZONE_A OR ZONE_B OR ZONE_C...",1,0,192" endswith=*ZONE_A OR ZONE_B OR ZONE_C...",0,0,192" maxevents=2

The first search line may be fine with your data if you are just looking for those words in your raw events, but I expect that you do not have events matching the startswith and endswith strings once the token is expanded. Without seeing an example of your data, I suspect you do not need to specify the zone data in the startswith and endswith strings at all.

On a separate note regarding transaction: it can silently give you wrong results if your data set is large, because it has to hold onto partial transactions until it finds an end event, so if you have long durations you can end up with incorrect results. It is generally possible to replace transaction with stats to achieve the same thing, but doing so requires some knowledge of your data.
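On the transaction-vs-stats point: the stats approach aggregates per key in a single pass instead of buffering partial transactions in memory. A toy Python sketch of the pairing logic (field names and timestamps are invented, not from the poster's data):

```python
# Group events by Alarm and derive the duration from the paired start/end
# timestamps, the way a stats-based rewrite of transaction would.
events = [
    {"Alarm": "A1", "_time": 100, "state": "start"},
    {"Alarm": "A1", "_time": 160, "state": "end"},
    {"Alarm": "A2", "_time": 120, "state": "start"},  # no end event yet
]

groups = {}
for e in events:
    g = groups.setdefault(e["Alarm"], {"start": None, "end": None})
    g[e["state"]] = e["_time"]

# Only complete start/end pairs yield a duration, mirroring maxevents=2
durations = {a: g["end"] - g["start"]
             for a, g in groups.items()
             if g["start"] is not None and g["end"] is not None}
print(durations)  # A2 is dropped as an incomplete transaction
```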
Correct - if you are getting no results, all the hosts are reporting in the time period of your search.
Since the 2020s, the dropdown lookup moved to the ITSI content pack app. There is a content pack, DA-ITSI-CP-unix-dashboards, that contains an automatic lookup for some sourcetypes. The lookup is Lookup-dropdowns, relying on a CSV lookup named dropdowns.csv and coupled with an automatic lookup named "dropdownsLookup".

The issue was that the CSV lookup is not shipped with the app, but is created/updated by a scheduled search, "dropdowns_lookup_migrate". That scheduled search was running, but failed to create the lookup because of field name issues. As a consequence, every time a search triggered the automatic lookup, the lookup threw an error because it could not find the CSV file to load.

To address the issue:
- We ran the search "dropdowns_lookup_migrate" manually (without the append=t) once, to create the base lookup with the correct fields. If you have no entities, the lookup has only one line with "all_hosts".
- Then we waited for the search bundle to replicate to the indexers.

After that, the lookup error stopped.
Oh, thanks! It is working in most cases. I found that there are cases where the installation event (new version) is generated faster than the removal event (old version). There are not many such cases, about 50 hits per week, but maybe it is possible to handle this case in the query? Thank you again so much for your help.
Just wanted to add this one for future readers. Another important advantage of HEC over TCP is error handling. Specifically, if you send data to a TCP endpoint, there is no interaction: no response from the TCP endpoint to let you know data has been received and processed. If there are load issues on the server or queues fill up, there is a chance that data will be lost; data may get dropped and the sending process will have no idea there was an issue.

With HEC, you get an HTTP response such as a 400 or 500 error indicating problems. While most of the possible errors are specific to HEC, at least two would be an advantage over TCP ("Server is busy" and "Internal server error"): https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/TroubleshootHTTPEventCollector#Possible_error_codes

Receiving these codes, a sender would know there is a problem and could attempt to resend the data later. You can also configure acknowledgment ("useAck"), which allows the sender to check and confirm that data has been received and indexed before purging those events from the system.
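To illustrate the retry behaviour those status codes make possible, here is a small Python sketch. The sender is injected as a plain function so the example runs without a real HEC endpoint; 503 stands in for the "server is busy" response:

```python
import time

# Retry loop enabled by HEC's HTTP responses: a TCP sender gets no status
# back, so it cannot know when to retry. Names here are illustrative.
def send_with_retry(send, event, retries=3, delay=0.0):
    for attempt in range(retries):
        status = send(event)
        if status == 200:
            return True
        time.sleep(delay)  # back off before retrying on an error status
    return False

# Fake endpoint: busy twice, then accepts the event
responses = iter([503, 503, 200])
ok = send_with_retry(lambda e: next(responses), {"event": "hello"})
print(ok)  # True: the event got through on the third attempt
```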
You're using the join command, which spawns a subsearch. Subsearches have a limit on runtime as well as on returned results, and you're hitting that limit. Try reworking your search so that you don't need join; it's often better to group your data with the stats command, especially as both searches you're trying to join are from the same index.

As a side note, with a raw search I don't think there will be a noticeable difference between TERM(Application) and just searching for the string Application. There would be a huge difference, though, if you reworked your | stats search into a tstats-based search.
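The "group with stats instead of join" idea boils down to reading both event types in a single pass and then keeping the keys that occur in both, with no subsearch and no result cap. A toy Python sketch with invented events:

```python
# One pass over a mixed stream, then count keys seen under both event
# types - the equivalent of stats values(type) by key instead of a join.
events = [
    {"type": "Application", "key": "A1"},
    {"type": "Application", "key": "A2"},
    {"type": "transaction", "key": "A1"},
    {"type": "transaction", "key": "A3"},
]

seen = {}
for e in events:
    seen.setdefault(e["key"], set()).add(e["type"])

matches = sum(1 for types in seen.values()
              if {"Application", "transaction"} <= types)
print(matches)  # keys present in both event types
```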
I have the below query:

index=demo-app TERM(Application) TERM(Received) NOT TERM(processed)
| stats count by ApplicationId
| fields ApplicationId
| eval matchfield=ApplicationId
| join matchfield [search index=demo-app TERM(App) TERM(transaction)
    | stats count by MessageCode
    | fields MessageCode
    | eval matchfield=MessageCode]
| stats count(matchfield)

When I run this search query, the statistics values are limited to 50,000. How can I tweak my query to see complete results without this restriction?
There is no such thing as a "generic PII data scan". First, you need to define what you want to find, then define how this data can be expressed, and then you search for it. And you'll always get false positives and false negatives; that's just how it is with automated searching for such loosely defined stuff. The more precisely defined the format, the better (like IBAN numbers).
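IBANs are a good example of why a precisely defined format helps: they carry a checksum, so a scanner can reject most false positives outright. A minimal Python sketch (structure check plus the ISO 7064 mod-97 test; a real scanner would also check per-country lengths):

```python
import re

def looks_like_iban(s):
    """Rough IBAN validity check: shape, then the mod-97 checksum."""
    s = s.replace(" ", "").upper()
    if not re.fullmatch(r"[A-Z]{2}\d{2}[A-Z0-9]{11,30}", s):
        return False
    # Move the first four characters to the end, map letters A..Z to
    # 10..35, and test the resulting integer modulo 97.
    rearranged = s[4:] + s[:4]
    digits = "".join(str(int(c, 36)) for c in rearranged)
    return int(digits) % 97 == 1

print(looks_like_iban("GB82 WEST 1234 5698 7654 32"))  # True (standard example)
print(looks_like_iban("GB00 WEST 1234 5698 7654 32"))  # False (bad check digits)
```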
You can create a lookup with a WILDCARD match type.
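Conceptually, a WILDCARD match type lets the lookup's key column contain * patterns, and values with no match can be left unchanged (in SPL, typically via coalesce after the lookup). A rough Python analogue using fnmatch, with names invented for illustration:

```python
from fnmatch import fnmatch

# Lookup whose key column contains a wildcard pattern, as a WILDCARD
# match_type lookup would; data here is made up.
lookup = {
    "Bobs Pizz*": "Bob's Pizza",
}

def lookup_customer(value):
    for pattern, standard in lookup.items():
        if fnmatch(value, pattern):
            return standard
    return value  # no match: keep the original value (the coalesce step)

print(lookup_customer("Bobs Pizzeria"))
print(lookup_customer("Frank's Artichokes"))
```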
If there are no new entries in your access log it could signal storage problems. Did you check your free disk space?
What do you mean by "added"? @ITWhisperer's search should be run on its own, not added to your search.

Alternatively, you can try counting split by time, so you can limit your search to a particular month or week (I think it would still run with a one-day resolution, but the results get denser and you won't be able to visualize them reasonably).

| tstats prestats=t count where index=<your_index> host=<your_host> by _time span=1w
| timechart span=1w count
Hello @ITWhisperer, I added the host name to the query provided and ran a search, but I am not seeing any results under the statistics tab. Does result=0 mean that the host is reporting and that is the reason we are not seeing results? Can you please confirm? Thanks
I have logs with a Customer field where the name of the customer is not consistent:

customer=Bobs Pizza
customer=Bob's Pizza
customer=Bobs Pizzeria

I want to use an automatic lookup to change them all to a standard name without needing to change existing searches.

customer_lookup.csv:

customer_name,standard_customer_name
Bobs Pizza,Bob's Pizza
Bobs Pizzeria,Bob's Pizza

I am trying to do this with a lookup table in the search before I make it an automatic lookup:

| lookup customer_lookup customer_name as Customer output standard_customer_name AS Customer

This lookup only works if the Customer returned in the search is actually in the lookup table. So Customer="Bobs Pizza" is in the result, but Customer="Frank's Artichokes" is not, and I can't add all customers to the table. I have tried many forms of the lookup; I can get a list with the original customer name and the standard customer name when one exists, but that won't work for current searches.

Can this be done? I would think it could cause problems, since someone could add an automatic lookup to hide certain things if needed.
We are trying to use the appdynamics node dependency and are currently unable to resolve it. It appears to be unavailable at the expected AppDynamics CDN location. Last week, version 23.5 was successfully found and downloaded, but today neither 23.5 nor 23.7 appears to be available:

npm install appdynamics
npm ERR! code E404
npm ERR! 404 Not Found - GET https://cdn.appdynamics.com/packages/nodejs/23.7.0.0/appdynamics-native-node.tgz
npm ERR! 404
npm ERR! 404 'appdynamics-native@https://cdn.appdynamics.com/packages/nodejs/23.7.0.0/appdynamics-native-node.tgz' is not in this registry.
npm ERR! 404
npm ERR! 404 Note that you can also install from a
npm ERR! 404 tarball, folder, http url, or git url.

Has anyone else been able to resolve this issue, or is there a known issue resolving this dependency?