
The first command of the map search needs to be a generating command, such as rest. Try adding the eval afterwards.

<Base Search>
| stats count min(_time) as firstTime max(_time) as lastTime values(user) as user by user, src_ip, activity, riskLevel
| map maxsearches=100 search="| rest splunk_server=local /services/App/.../ ioc=\"$src_ip$\" | eval activity=\"$activity$\""
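For anyone skimming, here is a minimal, self-contained illustration of the same token-passing pattern, using makeresults in place of the real rest endpoint (the endpoint path in the thread is elided, so the field values below are made up):

```
| makeresults
| eval src_ip="10.0.0.1", activity="login"
| map maxsearches=10 search="| makeresults | eval ioc=\"$src_ip$\", activity=\"$activity$\""
```

The first command inside map (makeresults here, rest in the real search) must be a generating command; the eval after it copies the outer row's values, which map substitutes into the $...$ tokens.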
I will try both approaches today and see what happens. Thanks for the suggestions!
It works! Many thanks for your help!
We have a search where one of the fields from the base search is passed to a REST API using the map command.

<Base Search>
| stats count min(_time) as firstTime max(_time) as lastTime values(user) as user by user, src_ip, activity, riskLevel
| map maxsearches=100 search="| rest splunk_server=local /services/App/.../ ioc=\"$src_ip$\""

But after this search, only the results returned by the REST API are shown. How can I include some of the fields from the original search, e.g. user and activity, so that they can later be used in a table? I tried adding the field using eval right before the REST call, but that doesn't seem to be working:

eval activity=\"$activity$\" | rest

I also tried using multireport, but only the first search is considered:

| multireport [ table user, src_ip, activity, riskLevel ] [ | map maxsearches=100 search="| rest splunk_server=local /services/App/.../ ioc=\"$src_ip$\"" ]

Is there a way to achieve this? The API call itself returns a set of fields which I am extracting using spath, but I also want to keep some of the original fields for added context. Thanks, ~Abhi
Thank you, this should work quite well for my needs.
Hello to everyone! I have a Windows server with a Splunk UF installed that consumes MS Exchange logs. These logs are stored in CSV format. The Splunk UF settings look like this:

props.conf

[exch_file_httpproxy-mapi]
ANNOTATE_PUNCT = false
BREAK_ONLY_BEFORE_DATE = true
INDEXED_EXTRACTIONS = csv
initCrcLength = 2735
HEADER_FIELD_LINE_NUMBER = 1
MAX_TIMESTAMP_LOOKAHEAD = 24
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = DateTime
TRANSFORMS-no_column_headers = no_column_headers

transforms.conf

[no_column_headers]
REGEX = ^#.*
DEST_KEY = queue
FORMAT = nullQueue

Thanks to the data quality report on the indexer layer, I found out that this source type has some timestamp issues. I investigated the problem by running a search on the search layer and found surprising event breaking. You can see an example in the attachment. The _raw data is OK and does not contain "unexpected" newline characters. What is wrong with my settings?
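One common cause of timestamp issues with INDEXED_EXTRACTIONS = csv is an unpinned timestamp format. A hedged sketch of a possible fix, assuming the DateTime column is ISO 8601 (the TIME_FORMAT and TZ values below are assumptions, not taken from the thread):

```
[exch_file_httpproxy-mapi]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = DateTime
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC
```

Note that with INDEXED_EXTRACTIONS the structured parsing happens on the UF itself, so this props.conf has to live on the forwarder, as it already does here.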
| rex "\w+\.(?<domaine_test>[\.\w-]+)"

If the - is at the end of the character class [], it doesn't need to be escaped.
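A quick way to sanity-check the regex without real data is a makeresults reproduction (the sample domain is hypothetical):

```
| makeresults
| eval raw="www.black-ice.com"
| rex field=raw "\w+\.(?<domaine_test>[\.\w-]+)"
| table raw domaine_test
```

Here domaine_test should come out as black-ice.com, because the trailing - inside the class is treated as a literal hyphen rather than a range operator.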
Create a second dropdown populated by the dynamic search.
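In Simple XML, that second dropdown might look something like this sketch (the lookup name and token names are hypothetical):

```
<input type="dropdown" token="product_tok" searchWhenChanged="true">
  <label>Product</label>
  <search>
    <query>| inputlookup host_products.csv | search host="$host_tok$" | stats count by product</query>
  </search>
  <fieldForLabel>product</fieldForLabel>
  <fieldForValue>product</fieldForValue>
</input>
```

The inner search re-runs whenever the host token changes, so the product list always reflects the selected host.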
map can be slow and limited - try something like this:

[| inputlookup testlookup
 | table index sourcetype] earliest=-2d@d latest=@d
| eval day=if(_time < relative_time(now(), "-1d@d"), "Yesterday", "Today")
| stats count by day index sourcetype
| eval {day}=count
| stats values(Today) as Today values(Yesterday) as Yesterday by index sourcetype
| fillnull value=0 Yesterday Today
| eval difference=abs(Yesterday - Today)
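The key trick above is eval {day}=count, which uses the value of day as a new field name. A toy reproduction with makeresults and made-up counts:

```
| makeresults count=2
| streamstats count as n
| eval day=if(n==1, "Yesterday", "Today")
| eval count=if(n==1, 10, 25)
| eval {day}=count
| stats values(Today) as Today values(Yesterday) as Yesterday
| fillnull value=0 Yesterday Today
| eval difference=abs(Yesterday - Today)
```

The second stats then folds the per-day rows back into one row per group, so Yesterday and Today line up as columns.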
Hello everyone, unfortunately, on the license master server I cannot see any data in the dashboards on the license usage page. I have also tried with the query below:

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*

But still nothing, no results found. Could you please help me? Thanks in advance.
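A first step worth trying: check whether any license_usage.log events reach the _internal index at all, since the license usage dashboards read from there. If the license master's own internal logs are not being indexed (or, in a distributed setup, forwarded to the indexers), the dashboards stay empty. A minimal check:

```
index=_internal source=*license_usage.log*
| stats count by host, sourcetype
```

If this returns nothing over a wide time range, the problem is ingestion of _internal rather than the dashboards themselves.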
Hi, I am trying to deploy a new index to my indexer cluster via the Cluster Master and have followed the usual documentation on how to deploy via the master-apps folder. I have done this before and it has worked no problem, but this time I have no idea why it is not working.

When I make the change to indexes.conf and run the command "splunk validate cluster-bundle", it gives me no errors and then brings me back to my CLI, so I would presume it has validated. Then I run the command "splunk show cluster-bundle-status" to check the bundle IDs, and they are still the same IDs on the active bundle and the latest bundle. It's as if Splunk is not recognising that a change has been made to the bundle and therefore cannot deploy it down to the indexers.

I ran the command "splunk apply cluster-bundle" and it gave me the below error. However, when I checked splunkd.log on the CM and the indexers, there was no indication of a validation error, or any error for that matter. Is there anything that I am missing here? I just can't work out why it is not recognising that a change has been made and updating the bundle IDs to be pushed down. Thanks
Hi, thank you for your response. I have some domains with the "-" character, for example black-ice.com. The result is "black". Is it possible to get the whole domain?
But Dropdown 1 has static values (host names) added, hence if I add dynamic search results to the same Dropdown 1, values are duplicated.
Does your lookup identify which products are associated with each host? If so, you can dynamically populate the dropdown based on the results of a search which filters the products based on the hostname chosen.
I'm using a modified search from splunksearches.com to get the events from the past two days and return the difference, for all of the indexes and sourcetypes (if they exist) in testlookup. While it works, the index and sourcetype do not line up with the results. Map, I found, handles this SPL a little differently than a normal search; the location of the stats command had to be moved to return the same results. My question is: is there a way to modify the SPL so the index/sourcetype lines up with the results? I'm pretty sure I'll eventually get it, but I've already spent enough time on this. Thanks.

testlookup has the columns index and sourcetype.

| inputlookup testlookup
| eval index1=index
| eval sourcetype1=if(isnull(sourcetype), "", "sourcetype="+sourcetype)
| appendpipe
    [| map search="search index=$index1$ earliest=-48h latest=-24h
        | bin _time span=1d
        | eval window=\"Yesterday\"
        | stats count by _time window
        | append
            [| search index=$index1$ earliest=-24h
            | eval window=\"Today\"
            | bin _time span=1d
            | stats count by _time window
            | eval _time=(_time-(60*60*24))]
        | timechart span=1d sum(count) by window
        | eval difference = abs(Yesterday - Today)"]
| table index1 sourcetype1 Yesterday Today difference

Current output (the index1/sourcetype1 values land on a different row than the counts):

index1  sourcetype1  Yesterday  Today  difference
test1   st_test1
                     10         20     10
Hi all, we have been facing some errors on our Splunk indexers, like the one below:

```
Failed processing http input, token name=<HECtoken>, channel=n/a, source_IP=, reply=9, events_processed=62, http_input_body_size=47326, parsing_err="Server is busy"
```

I found in some discussions that increasing queue sizes may help. We are indexing ~400GB per day, so it makes sense that the default queue sizes might not be good enough in this case. However, the Splunk docs don't have a detailed explanation of which queues can be set in server.conf and what proportions we need to consider. Can someone help me understand this?
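For reference, queue sizes are raised per queue in server.conf on the indexers. The queue names below (parsingQueue, aggQueue, typingQueue, indexQueue) are the standard pipeline queues, but the sizes shown are illustrative assumptions, not tuned recommendations:

```
# server.conf on each indexer; a restart is required
[queue=parsingQueue]
maxSize = 512MB

[queue=aggQueue]
maxSize = 512MB

[queue=typingQueue]
maxSize = 512MB

[queue=indexQueue]
maxSize = 512MB
```

Before raising sizes, it is worth checking the Monitoring Console's indexing performance views to see which queue is actually filling up; a persistently full indexQueue usually points at storage throughput rather than queue size.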
You may be able to adjust the props.conf settings to change how events are ingested.  Can you share raw events?
@ITWhisperer yes, correct, but I have products for each hostname which need to be shown in the dropdown.

Hostname A = Product A, Product B, Product C, etc.
Hostname B = Product X, Product Y, Product Z, etc.

So, depending on the hostname, the products need to be populated in the dropdown.
index=my_index source="/var/log/nginx/access.log"
    [| makeresults
     | addinfo
     | bin info_min_time as earliest span=15m
     | bin info_max_time as latest span=15m
     | table earliest latest]
| bin _time span=15m
| stats avg(request_time) as Average_Request_Time by _time
| streamstats count as weight
| eval alert=if(Average_Request_Time>1,weight,0)
| stats sum(alert) as alert
| where alert==1
Thanks again @ITWhisperer. Is there any way to restrict the query to the previous two time bins, as the cron scheduler doesn't fire exactly on the hour and I'm getting 3 bins as you said? Thinking of running at 1:05pm: if that could get the 12:30-12:45 and 12:45-1:00 bins, I think that would work well.
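One hedged way to do that, assuming the alert fires a few minutes past the quarter hour: keep only the bins whose end time has already passed, then take the two most recent. Field names follow the earlier search in the thread:

```
index=my_index source="/var/log/nginx/access.log" earliest=-45m
| bin _time span=15m
| stats avg(request_time) as Average_Request_Time by _time
| eval bin_end=_time+900
| where bin_end <= now()
| sort 0 - _time
| head 2
| sort 0 _time
```

Run at 1:05pm, the where clause drops the partial 1:00-1:15 bin, and head 2 keeps the 12:30-12:45 and 12:45-1:00 bins.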