All Topics


Hi, I am a newbie in Splunk. I have to write a Splunk query to get the status_code count for errors (status range 300 and above) and successes (status range 200-299) by host. This is my current search (over 24 hrs), but unfortunately it returns 0 results, apart from the host list being displayed:

index=xxxx host=* status=*
| stats count(status>=300) as "Error", count(status<299) as "OK" by host

Expected result:

Host | Error | OK
-----|-------|---
xxxx |  23   |  1
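The usual fix is to wrap each condition in eval(), since count(<field>) only counts events where the named field is non-null and does not evaluate comparison expressions. A minimal sketch, assuming status is numeric:

index=xxxx host=* status=*
| stats count(eval(status>=300)) as "Error" count(eval(status>=200 AND status<300)) as "OK" by host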
Hi, I have several model ids: 12310, 12320, 12330. If the suffix is "10", "20", or "30", I define the typemachine accordingly:

type | typemachine
-----|------------
10   | car
20   | moto
30   | bicycle

| eval typemachine=case(type="10", "car", type="20", "moto", type="30", "bicycle", 1=1, "autre")

However, I want to add an exception: if id=56410 or id=65210, it must be "moto". Can I do that, please? Thanks
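Since case() returns the value of the first condition that matches, the exception can simply be listed before the suffix rules. A minimal sketch, assuming id holds the full model id and type holds the two-digit suffix:

| eval typemachine=case(id=56410 OR id=65210, "moto", type="10", "car", type="20", "moto", type="30", "bicycle", 1=1, "autre")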
Hello all. I am building a dashboard in which I needed to create a subsearch. This is the piece of code that builds it:

<panel>
  <title>random panel title</title>
  <table depends="$show_debug$">
    <search id="Build_list">
      <query>index= here it goes my query | fields * | table important_field | format</query>
      <finalized>
        <condition match="'job.resultCount' != 0">
          <set token="my_list">$result.search$</set>
        </condition>
        <condition>
          <unset token="my_list"></unset>
        </condition>
      </finalized>
    </search>
    <option name="drilldown">none</option>
  </table>
</panel>
</row>
<row>
  <panel>
    <title>another panel</title>
    <html depends="$show_debug$">
      <h3>$my_list$</h3>
    </html>
  </panel>

and here is where I use the $my_list$ token:

<search>
  <query>$my_list$ | foreach something.* [rename "&lt;&lt;FIELD&gt;&gt;" as "&lt;&lt;MATCHSEG1&gt;&gt;"] | stats values(Url), values(UrlDomain)</query>
  <earliest>$earliest$</earliest>
  <latest>$latest$</latest>
</search>

This worked well the first time, but now, for every new query I run, no matter whether I close the browser and open a new browser/Splunk session, I still see the results of the first query I ran. It is as if $my_list$ has the first-ever values baked in and I cannot reset them. I thought that <unset token="my_list"></unset> would clear it, but it does not.

Any help please? The goal here is to use this $my_list$ token, which holds a Splunk query (note the | format at the end of the query), but of course this token needs to be refreshed every time I run a new query.

Thanks a lot in advance. Mario
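One thing that may be worth trying, sketched below: Simple XML also offers a <done> handler, which runs each time the search finishes, so the token conditions get re-evaluated on every run rather than only when the job is finalized. A sketch under that assumption, keeping the original condition syntax:

<search id="Build_list">
  <query>index= here it goes my query | fields * | table important_field | format</query>
  <done>
    <condition match="'job.resultCount' != 0">
      <set token="my_list">$result.search$</set>
    </condition>
    <condition>
      <unset token="my_list"></unset>
    </condition>
  </done>
</search>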
Hello, I have data arriving in near real-time on a Linux host with a UF installed on it. It's a new push; the objective is to send these events to the Splunk indexer so they can be viewed from the search head. Everything is in place except that I need to put new props.conf, inputs.conf, and transforms.conf files onto that server. My question is where and how I should put those configuration files. Should I create a new folder with a local directory under etc/apps/ from the CLI and copy all three configuration files there, or copy them into ...etc/system/local, or somewhere else? Any recommendations will be highly appreciated. Thank you so much.
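A common approach is a dedicated app directory rather than etc/system/local, since app-level configuration is easier to manage, upgrade, and override. A minimal sketch of the layout on the forwarder (the app name my_data_inputs is just a hypothetical example):

$SPLUNK_HOME/etc/apps/my_data_inputs/local/inputs.conf
$SPLUNK_HOME/etc/apps/my_data_inputs/local/props.conf
$SPLUNK_HOME/etc/apps/my_data_inputs/local/transforms.conf

One caveat worth noting: a universal forwarder does not parse events, so most props.conf/transforms.conf settings (index-time extractions, regex-based routing, etc.) take effect on the indexer or a heavy forwarder rather than on the UF itself.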
Hi Team, the indexer is going down very frequently due to too many open files. Currently the ulimit value for open files on the server is 128000. Are we good to increase the ulimit value to more than 200000? If we increase the ulimit value, will the open-files issue be resolved? And if we increase it, is there any other impact on the indexer? Kindly assist me with this issue. Thanks
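If the limit does need to be raised and Splunk runs under systemd, a minimal sketch of making the change persistent (the unit name Splunkd.service and the override path are assumptions; they may differ on your host):

# /etc/systemd/system/Splunkd.service.d/override.conf  (hypothetical path)
[Service]
LimitNOFILE=262144

followed by systemctl daemon-reload and a Splunk restart. Raising the limit treats the symptom, so it is also worth checking what is holding so many files open (bucket counts, search concurrency, forwarder connections) before assuming the higher limit alone will resolve the issue.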
Hi all, I've been working on this query for the last few days and still can't seem to crack it. (I appreciate the person from this forum who helped me get this far!) I'm trying to create two groups: (1) Engaged and (2) Not Engaged. The events are grouped by sessionId, which is a single conversation with a chatbot. A response from the bot or the customer is an intent. So I'm trying to count sessionIds with 5 or more intents as Engaged. Here's the idea, but obviously this query won't work, since the bottom two eval commands are invalid:

index=conversation botId=ccbd
| eval intent_count=if(like(intent,"%"), "1", "0")
| stats sum(intent_count) as intent_count by sessionId
| eval engaged=(where intent_count => 5)
| eval not_engaged(where intent_count <5)

Then I want to create a table that would look like this:

HEADER      | TOTALS
------------|-------
Engaged     | 100
Not_engaged | 100
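A minimal sketch of one way to produce that table, assuming every event carrying a non-null intent counts as one intent:

index=conversation botId=ccbd
| stats count(intent) as intent_count by sessionId
| eval HEADER=if(intent_count>=5, "Engaged", "Not_engaged")
| stats count as TOTALS by HEADER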
Hi all, I have been working with Splunk SOAR Community Edition for some time. Now I am wondering how the commercial version (is it called the Enterprise version?) differs from the Community version. Unfortunately I have not found any documentation on this. Could someone provide me with some information? Thanks a lot, Simon
Why is the dashboard's "Schedule PDF Delivery" producing wrong results in Splunk Enterprise version 8.2.4? For example, the dashboard contains a field called students_number whose value is 2344, but in the Schedule PDF Delivery output the value is 15782943.
I have created a trial account but can't create an app, and I couldn't log in with the account and user. I also can't reset the password.
Hello, I am working on a Splunk query and I need help adjusting my rex command to split two values that live in one field into their own fields. Example below:

index=test sourcetype=test category=test
| rex field=user "(?<region>[^\/]+)\/(?<username>[^\w].+)"
| fillnull t
| sort _time
| table _time, username, user, region, sourcetype, result, t
| bin span=1d _time
| dedup t

The user field has: test\test1 and I need to split that so that username=test and region=test1.
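Since the sample value test\test1 is separated by a backslash rather than a forward slash, the regex needs to match a literal backslash, which SPL requires to be written as four backslashes inside a quoted rex. A minimal sketch under that assumption:

index=test sourcetype=test category=test
| rex field=user "(?<username>[^\\\\]+)\\\\(?<region>.+)"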
I want to show statistics of daily volume and the latest event for all sourcetypes in a single table. Can you please help?
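A minimal sketch of one way to combine both in a single table, assuming event count over the last day is an acceptable measure of daily volume:

| tstats count as daily_events max(_time) as latest_event where index=* earliest=-1d by sourcetype
| eval latest_event=strftime(latest_event, "%Y-%m-%d %H:%M:%S")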
Does anyone know if the current Dynatrace add-on will be updated to use the Dynatrace V2 API? We have a requirement to ingest some web app metrics from Dynatrace that are not easily available through the V1 API, and we would also like to know that the add-on will remain functional if/when the V1 API is made redundant.
Hi Community,

I have two separate Splunk installs: one is version 8.1.0 and the other is 8.2.5. The older version is our production Splunk install. I can see a lag in a dashboard we set up which calculates the difference between the index time and the actual event time. Since it's a production environment, I assumed that the lag might be due to the reasons below:

1. The universal forwarder is busy, as it's doing a recursive search through all the files within the folders. This is done for almost 44 such folders. Example: [monitor:///net/mx41779vm/data/apps/Kernel_2.../*.log]
2. The forwarder might be too outdated to handle such loads. The version used is 6.3.3.
3. The Splunk install is busy waiting, as there is already a lot of incoming data from other forwarders.

To clarify the issue, I replicated the setup in another environment. This is a test environment which does not have a heavy load as in production but has the same settings with reduced memory. When I set up a completely new forwarder and replicate the setup in the test environment, I still see the same lag. This is very confusing: why is it happening? Could someone provide me with tips or guidance on how to work through this issue? Thanks in advance.

Regards, Pravin
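To quantify the lag per forwarder while troubleshooting, a minimal sketch (the index=* filter is a placeholder; narrow it to the affected data):

index=* earliest=-1h
| eval lag_seconds=_indextime - _time
| stats avg(lag_seconds) as avg_lag, max(lag_seconds) as max_lag by host, sourcetype
| sort - max_lag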
I want to create an alert that fires when at least 500 events match with the same source IP address, the same destination address, and different destination ports in 1 minute. The search I've come up with so far is as follows, although I'm not sure it's what I really need:

index=net-fw (src_ip=172.16.0.0/12 OR src_ip=10.0.0.0/8 OR src_ip=192.168.0.0/16) AND (dest_ip=172.16.0.0/12 OR dest_ip=10.0.0.0/8 OR dest_ip=192.168.0.0/16) action IN (allowed blocked)
| stats first(_time) as date dc(dest_port) as num_dest_port by src_ip, dest_ip
| where num_dest_port > 500
| convert ctime(date) as fecha

I think what I am missing is "with the same source IP and the same destination IP in one minute". Could someone help me with this problem? Thanks in advance and best regards.
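A minimal sketch of the per-minute grouping, assuming fixed one-minute windows are acceptable rather than a sliding 60-second window:

index=net-fw (src_ip=172.16.0.0/12 OR src_ip=10.0.0.0/8 OR src_ip=192.168.0.0/16) AND (dest_ip=172.16.0.0/12 OR dest_ip=10.0.0.0/8 OR dest_ip=192.168.0.0/16) action IN (allowed, blocked)
| bin _time span=1m
| stats dc(dest_port) as num_dest_port by _time, src_ip, dest_ip
| where num_dest_port >= 500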
Hello, I am developing a command to replay an alert. I'm not sure if it's a good idea to use the same method as the one used in the previous version of the program. For the replay, after having determined the rules which will have to fire, I use:

kwargs_block = {'dispatch.earliest_time': earliest,
                'dispatch.latest_time': latest,
                'trigger_actions': self.trigger}
job = search.dispatch(**kwargs_block)

Here is an example of a replay started at 11:52, but its scheduled task runs at 30 minutes past each hour, so I would like to have 11:30. Do you have any idea how to set the effective run time of the replayed alert?
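One parameter worth checking, sketched below: the saved-search dispatch endpoint accepts dispatch.now, which runs the search as if the supplied time were the current time, so relative time windows resolve against it. A sketch under the assumption that the SDK passes the parameter through unchanged (replay_time is a hypothetical variable holding the 11:30 timestamp):

# hypothetical: replay the alert as if it had been dispatched at replay_time
kwargs_block = {'dispatch.earliest_time': earliest,
                'dispatch.latest_time': latest,
                'dispatch.now': replay_time,
                'trigger_actions': self.trigger}
job = search.dispatch(**kwargs_block)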
The objective is to display multiple modifications done by a submitter, and to show the number of modifications, the respective filenames, and the hash names. Example: submitter John did 15 modifications: 3 modifications to file app.exe, 2 modifications to gap.exe, and 10 modifications to rap.exe. So the display should show 15 hash files, and my SPL does the job. The SPL ends with:

| stats values(risk_country) AS extreme_risk_country, list(flagged_threat) AS flagged_threat, list(times_submitted) AS times_submitted, list(md5_count) AS unique_md5, list(meaningful_name) AS file_name, list(md5_value) as md5 by submitter_id

I do see the results, but I am unable to easily eyeball where the hash list for one filename ends and the next one begins, especially when there are lots of hashes. Please check the attachment of the output I am getting. I want to easily see/distinguish where one set of hashes finishes for a file and the next one starts. I am looking for suggestions on how to make it visually separate. Thank you.
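One way to get a clear visual break per file, sketched under the assumption that meaningful_name identifies the file: move it into the by clause so each file gets its own row (submitter-level fields will then repeat across that submitter's rows):

| stats values(risk_country) AS extreme_risk_country, list(flagged_threat) AS flagged_threat, list(times_submitted) AS times_submitted, list(md5_count) AS unique_md5, list(md5_value) AS md5 by submitter_id, meaningful_name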
Hello Splunkers, I have a question regarding the number of indexers or indexer clusters that can reside in a single site cluster. Suppose I have 400 indexers: is there a limit on the number of indexers in a single site? And another question: how many indexers can I place in an indexer cluster? Can it be more than 3?
Sumologic query:

_source="VerizonCDN"
| json field=_raw "path"
| json field=_raw "client_ip"
| json field=_raw "referer"
| where %referer = ""
| where %status_code = 200
| json field=_raw "user_agent"
| count by %host,%path,%client_ip,%referer,%user_agent
| where _count >= 100
| order by _count desc

and my conversion to Splunk:

source="http:Emerson_P1CDN" AND status_code=200 AND referer=""
| stats count by host, path, client_ip, referer, user_agent
| where count >= 100
| sort - count

Do you think I converted it right? The results in Splunk were different from Sumologic.
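One common source of divergence, sketched below: in Splunk, referer="" only matches events where the field exists and is empty, and stats ... by referer silently drops events where referer is null. A sketch under the assumption that the events are JSON and the fields need explicit extraction:

source="http:Emerson_P1CDN"
| spath
| search status_code=200
| fillnull value="" referer
| where referer=""
| stats count by host, path, client_ip, referer, user_agent
| where count >= 100
| sort - count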
Is there a way to configure an external repository as the default one? I noticed that when I create a new playbook or modify an existing playbook from another remote repository, it always gets saved into the local repository. How do I change that behaviour to make another repository the default? I am on SOAR on-prem 5.1.0.
I need to get a count of events by day and by hour or half-hour, using a field in the Splunk log which is a string whose value is a date, e.g. eventPublishTime: 2022-05-05T02:20:40.994Z. I tried some variations of the query below, but it doesn't work. How should I formulate my query?

index=our-applications env=prod
| eval publishTime=strptime(eventPublishTime, "%Y-%m-%dT%H:%M:%SZ")
| convert timeformat="%H:%M" ctime(publishTime) AS PublishHrMin
| convert timeformat="%Y-%m-%d" ctime(_time) AS ReceiptDate
| stats c(ReceiptDate) AS ReceiptDateCount by ReceiptDate, parentEventName, PublishHrMin

Thank you
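A likely culprit: eventPublishTime carries milliseconds (.994), so the format "%Y-%m-%dT%H:%M:%SZ" fails to parse and strptime returns null. A minimal sketch, assuming the timestamps are always UTC with millisecond precision (span=1800 seconds gives half-hour buckets):

index=our-applications env=prod
| eval publishTime=strptime(eventPublishTime, "%Y-%m-%dT%H:%M:%S.%3QZ")
| bin span=1800 publishTime
| eval publishBucket=strftime(publishTime, "%Y-%m-%d %H:%M")
| stats count by publishBucket, parentEventName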