All Posts



Thank you. Maybe that could be used as a workaround. I guess I have to do the extraction change/enhancement myself then.
| eval id=random() | sort 0 id | streamstats count as id | eval group=((id - 1)%5) + 1 | stats list("Display Name") as "Display Name" by group
Thank you, the second option works for what I need.
This doesn't trigger the alert either. My original alert (with traces.count) was triggered once during my tests, when I had 3 traces with errors in a short time period, but then it wasn't triggered anymore. Is there maybe a better way to create an alert for such single events in Splunk? I think the "static threshold" should rather be used for continuous metrics like CPU usage, but I didn't find any other option so far.
I've looked at similar searches online and have come up with this:
| table "Display Name" | eval "group" = (random() % 2) + 1 | stats list("Display Name") as "Display Name" by "group"
This is returning random names in two groups:

group | Display Name
1 | joe blogs 5, joe blogs 2, joe blogs 6
2 | joe blogs 7, joe blogs 8, joe blogs 12

Any ideas how I can set the number returned for each group? Maybe using the limit function?
Ref Doc - Splunk Add-on for GCP Docs. Currently, the Cloud Storage Bucket input doesn't support pre-processing of data (untar/unzip/gunzip/etc.). The data must be pre-processed and ready for ingestion in a UTF-8 parseable format.
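Since the input can't decompress objects, one option is to pre-process them out of band before ingestion. A minimal shell sketch, assuming `gsutil` is available and using placeholder bucket and object names:

```
# decompress a gzipped object and re-upload it as plain UTF-8 text
gsutil cp gs://my-raw-bucket/logs/app.log.gz /tmp/app.log.gz
gunzip /tmp/app.log.gz
gsutil cp /tmp/app.log gs://my-ingest-bucket/logs/app.log
```

The add-on would then be pointed at the bucket containing the decompressed objects.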
If your values are in a multi-value field, you can do something like this | eval choice=mvindex(displayName, random()%200) If the names are in separate events, you could do something like this | eval id=random()%500 | sort 0 id | head 5
From what you are saying and reading between the lines, I am guessing that when All is chosen, the value of the token is set to "*". When this is used in a search, e.g. field=$token$, the "*" will equate to all non-null values, which is why your search is not dealing with "empty values". To get around this, you may have to change the way the token is set up and used. For example, if you change the value prefix to <valuePrefix>field="</valuePrefix> and the value suffix to <valueSuffix>"</valueSuffix>, and then treat the choice of "All" as setting an empty token, your search can use $token$ instead of field=$token$. This is easier to do in Classic/SimpleXML dashboards than in Studio.
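A minimal SimpleXML sketch of that idea (the token name, field name, and the `ALL` sentinel value are illustrative assumptions, not taken from the original dashboard):

```
<input type="multiselect" token="token">
  <label>Field filter</label>
  <choice value="ALL">All</choice>
  <valuePrefix>field="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <default>ALL</default>
  <change>
    <!-- when "All" is chosen, blank the token so the search applies no filter -->
    <condition value="ALL">
      <set token="token"></set>
    </condition>
  </change>
</input>
```

The search then references bare $token$, which expands to nothing when "All" is selected and to an OR of field="..." clauses otherwise.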
- Check that the port is enabled in the OS firewall. - Check that the correct sourcetype is configured. - Try to search the data on the indexer itself to verify it's not a connectivity issue between the search head and the indexer.
Hi @dikaaditsa, which index did you configure in your input, and is that the index you're using in your search? Did you install the Fortinet FortiGate Add-on for Splunk (https://splunkbase.splunk.com/app/2846) to get correct parsing? In addition, it isn't a best practice to use Splunk to receive syslog directly. The best approach is to configure a syslog receiver (e.g. rsyslog or syslog-ng) that writes logs to disk, and then use Splunk to read those files. This way your syslog input stays active even if Splunk is down, and it puts less load on the system, avoiding queues. Does your distributed search (you have one SH and one IDX) run correctly? In other words, are other searches executed correctly? Ciao. Giuseppe
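A minimal rsyslog sketch of that setup (the output path and the sender's IP are placeholder assumptions); Splunk would then read the written files with an ordinary file monitor input:

```
# /etc/rsyslog.d/fortigate.conf - receive syslog on UDP 514 and write it to disk
module(load="imudp")
input(type="imudp" port="514")

# one file per sending host
template(name="FortigateFile" type="string"
         string="/var/log/fortigate/%HOSTNAME%.log")
if ($fromhost-ip == '192.0.2.10') then {
    action(type="omfile" dynaFile="FortigateFile")
    stop
}
```

A matching (assumed) inputs.conf stanza on the Splunk side would then be something like [monitor:///var/log/fortigate/*.log] with the desired index and sourcetype.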
Hi All, I have already configured log ingestion from FortiGate using syslog; the logs are sent over UDP port 514. I also set up a data input in Splunk Enterprise to receive the data on port 514. When I run tcpdump on the Splunk VM, the data is successfully flowing from the FortiGate to the Splunk VM, but when I search from Splunk Web, no data appears. Currently I ingest the data to one indexer and search from a separate search head. Please give me some advice to solve my issue. Thank you
After debugging in so many ways, I found out that a field I'm using in the query does not include empty values while "All" is selected. Do you know how I can include empty values as well when "All" is selected in the multiselect dropdown?
I have a search that links the problem and problem_task tables, with a scenario that gives unexpected results. My search brings back the latest ptasks against the problem, but I have identified some tasks that were closed as duplicates after the last update on the active tasks:

(`servicenow` sourcetype="problem" latest=@mon) OR (`servicenow` sourcetype="problem_task" latest=@mon dv_u_review_type="On Hold")
| eval problem=if(sourcetype="problem",number,dv_problem)
| stats values(eval(if(sourcetype="problem_task",number,null()))) as number,
    latest(eval(if(sourcetype="problem_task",active,null()))) as task_active,
    latest(eval(if(sourcetype="problem_task",dv_u_review_type,null()))) as dv_u_review_type,
    latest(eval(if(sourcetype="problem_task",dv_due_date,null()))) as task_due,
    latest(eval(if(sourcetype="problem",dv_opened_at,null()))) as prb_opened,
    latest(eval(if(sourcetype="problem",dv_active,null()))) as prb_active
    by problem
| fields problem, number, task_active, dv_u_review_type, task_due, prb_opened, prb_active
| where problem!=""

Is it possible to mark an event that is closed as out of scope and then exclude all the events with the same number?
Hi, I know the post was from 2019, but for the next person who lands on this topic, I'll share some tips. Use a double stats to avoid mvexpand:

index=index1
| some crazy stuff
| fields source1 host
| append [search index=index2 | some more crazy stuff | fields source2 host]
| stats values(source1) as source1, values(source2) as source2 by host
```add this next line if you want rows where source1 or source2 is null: | fillnull value="N/A" source1 source2```
| stats c by host source1 source2

Hope this will be helpful.
Hi @hazem, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @rvany, I don't know if this can solve your issue, but I found that with XML rendering not all the fields are correctly displayed; try setting renderXml=0 in inputs.conf. Ciao. Giuseppe
Morning all, I am trying to work out how to use Splunk SPL to pick random names from a list. I have one field called 'displayName'; there are over 200 entries and I'd like to use Splunk to pick 5 random names. Appreciate any help with this. Paula
Splunk Enterprise: 9.0.3 (Linux). Splunk Add-on for Microsoft Windows: 8.9.0. Data source: Windows Server 2016. Data format: XML. When extracting EventIDs from XML data, the EventID is _not_ extracted if there's a "Qualifiers" attribute; only the "Qualifiers" field is then extracted - see screenshot. Is this intentional?
  | rex max_match=0 ": \[(?<id>\w+)\]"  
  | rex max_match=0 "\[KOREASBC1\](?<message>[^;]+)"