All Topics

Hi everyone, I'm using the CheckPoint Firewall Block app to block some IPs. If I try to block them manually like this [screenshot], I get this [screenshot]: the IP is being blocked. However, when I configure an alert condition to block it automatically [screenshot], I get this [screenshot]: the IP is not being blocked. Has anyone had the same problem and knows how to solve it?
I am trying to write a Splunk query to show what percentage of traffic is split between my on-premise and cloud environments. My Splunk index receives events from my legacy on-premise solution and from my cloud solution. The cloud solution can scale as needed, and it has a healthcheck which submits a request every minute to test the service. From the service/event perspective there is no way to distinguish a real request from a healthcheck request.

When counting the percentage of traffic I want to exclude the healthcheck requests that occur in the cloud; however, these look like normal requests in my data. Each host in the cloud performs a healthcheck every minute, so if I have 10 hosts in the cloud, then over 10 minutes the total requests from the cloud need to have 100 subtracted from them (10 hosts * 10 minutes) to give me the true count of real traffic. I then use this to calculate the percentage of traffic handled by the on-premise and cloud solutions. However, I can't just hardcode a subtraction of 100, because the number of hosts in the cloud is dynamic and scales with demand.

I have written the following Splunk query, which does what I want but doesn't scale well. For a 1-hour window it's fine, but for a 24-hour window I run into limitations (10,000 records) due to my user role in Splunk. Is there a better or simpler way to write this query? My logic was: get all the events, create a new field for each minute, sort by host and minute, then count events for each host and minute. For the events that came from a cloud server, subtract 1 from the count, since that host will have done a healthcheck; then get the max count for each host and minute and sum them up for the given environment (on-prem or cloud). From there I just get the total (real) requests and calculate percentages.

index="some_app_index" sourcetype="some_app_source"
| eval eventMin=strftime(_time,"%M")
| sort host, eventMin
| streamstats count as hostcount reset_on_change=true by host, eventMin
| eval hostcount=if(like(host,"cloudhost%"), hostcount-1, hostcount)
| stats max(hostcount) as minutecount by eventMin, host, environment
| stats sum(minutecount) as envtotal by environment
| eventstats sum(envtotal) as total
| eval traffic_split=round(envtotal/total*100, 2)
| chart values(traffic_split) by environment

I feel I am probably missing a trick here, or I have overcomplicated it after looking at the problem for so long.
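A minimal sketch of one way to avoid the sort/streamstats pass entirely (the sort command is a likely source of the 10,000-record truncation): bucket events into one-minute bins and let stats do the per-host counting, then subtract one healthcheck per cloud host per minute. The index, sourcetype, and cloudhost naming pattern are taken from the question; everything else is an assumption.

index="some_app_index" sourcetype="some_app_source"
| bin _time span=1m
| stats count as requests by _time, host, environment
| eval requests=if(like(host,"cloudhost%"), requests-1, requests)
| stats sum(requests) as envtotal by environment
| eventstats sum(envtotal) as total
| eval traffic_split=round(envtotal/total*100, 2)

Because the first stats collapses events to one row per host per minute, the row count stays small even over a 24-hour window.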
Has anybody encountered a strange time shift when applying a model to data? Model generation: [screenshot] Apply: [screenshot]
spath "log.message" | search "log.message"="REQ_TRACK_ID_MISSING*" OR "log.message" ="DESERIALIZATION_EXCEPTION*" OR "log.message" = "SERIALIZATION_EXCEPTION*".   Then from the results, I want to t... See more...
spath "log.message" | search "log.message"="REQ_TRACK_ID_MISSING*" OR "log.message" ="DESERIALIZATION_EXCEPTION*" OR "log.message" = "SERIALIZATION_EXCEPTION*".   Then from the results, I want to trim the asterisk part of string and print a table with count eg. log.message count REQ_TRACK_ID_MISSING 10 DESERIALIZATION_EXCEPTION 12 SERIALIZATION_EXCEPTION  5   I tried so many functions including replace, trim.. but I'm not able to formulate the results as shown above.  How can we achieve this?
We used the REST receivers/simple API to send a body with some fields to index, as a URL-encoded form. Among these there is a time field containing a timestamp. We configured the sourcetype as in the figure. The problem is that Splunk is indexing with the time it receives the data (as if DATETIME_CONFIG were CURRENT, or it found no fields with time information). An example of the data is:

name=session_started&params=%7B%22request_id%22%3A+%220af2918a-0125-4573-9a27-bd1a6deef75d%22%2C+%22subject%22%3A+%22mmt-112%22%7D&time=2021-09-16T09%3A24%3A08.355865

We thought the encoded data could be the problem, so we changed the format of the body sent to Splunk to JSON:

{"name": "session_started", "params": "{\"request_id\": \"0af2918a-0125-4573-9a27-bd1a6deef75d\", \"subject\": \"mmt-112\"}", "time": "2021-09-16T09:24:08.355865"}

but _time was again the time of receipt. We tried several tweaks, but none of them succeeded: we checked the strptime format ("%Y-%m-%dT%H:%M:%S.%6N") and it is correct, e.g. "2021-08-31T18:15:20.268841"; we tried to explicitly set the timezone (our times are in UTC) but nothing changed. There is no error or warning in the internal log, even if we put a non-existent field instead of time. When searching using that sourcetype, the time field is parsed correctly, so the system is reading it correctly. Any suggestion? What to do? What to try? A big thanks to the Splunk gurus who will help us!
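A minimal sketch of the props.conf timestamp settings one would expect to work here; the sourcetype name is an assumption. Note that with the URL-encoded form the colons arrive as %3A, so a TIME_PREFIX/TIME_FORMAT pair written for literal colons can only match the JSON variant.

[my_rest_sourcetype]
# anchor just past the JSON key so only the value is parsed (assumed field layout)
TIME_PREFIX = "time":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
MAX_TIMESTAMP_LOOKAHEAD = 30

One thing worth checking: timestamp extraction happens on the parsing tier, so these settings must live on the indexer or heavy forwarder that first handles the data, not only on the search head.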
I'm trying to get a large text file ingested using the HEC. In my searches for the data, I see events that say "Message body length 3169085 greater than maximum allowed (2097152)". Where can I change that maximum? I can't find any setting called "Message body length", and I can't find any setting set to 2097152.
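A sketch of one place to look, assuming the message comes from HEC's request-size check: limits.conf on the instance hosting the HEC input has an [http_input] stanza with a max_content_length setting, in bytes. Whether this exact setting produces that message is an assumption; the 2097152 value (2 MB) suggests a size cap of this kind.

# limits.conf on the HEC-receiving instance (restart required)
[http_input]
# assumed: raise the per-request body cap above the largest payload, e.g. 4 MB
max_content_length = 4194304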
I'm trying to get a regex to work in Splunk that works in regex101. I'm using the regex below:

\b(a_msg)\b[^"]+"([^"]*)"

This extracts everything after the a_msg field, between the double quotes. I want to save this as a field extraction. Any idea how I can get this to work? Example data:

{"log":"a_level=\"INFO\", a_time=\"2021-09-17 07:33:35,210\",  a_msg=\"CommissionRouteType: Client / clientId: 111/ planId: 111 / PolicyBusinessId: 111\","level":"info"}
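A sketch of one likely fix, with two assumptions: Splunk field extractions need a named capture group rather than the numbered groups used above, and in the raw event the inner quotes are escaped as \", so a capture run against _raw would pick up stray backslashes. Extracting the log field first sidesteps the escaping:

| spath input=_raw path=log output=log
| rex field=log "a_msg=\"(?<a_msg>[^\"]+)\""

Once spath has unescaped the log field, the quotes are literal again and the simple [^\"]+ capture works; the same pattern with a named group can then be saved as a field extraction on the extracted field.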
Hello, how can I change the search result limit? At the moment, a maximum of 10K lines is shown.
Hi, I want to set up my 7-day trial of the Splunk Enterprise Security Sandbox, but when I click Start Trial I get this error: "We're sorry, an internal error was detected when creating the stack. Please try again later." I am also receiving an email saying "We are sending this notification to inform you of a system error that prevented your Free Splunk Enterprise Security Trial from being created." Has anyone experienced this or found a solution? Thanks.
Hi, I want to copy some logs from one index to another index with the same host information. I use the collect command to do this. But after the copy, I see that the host information on every event is the same: the search head's IP address. So I can't search by host. How can I do this? Can you help me? Thanks. Best regards
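A sketch of one possible fix, assuming a Splunk version where collect supports output_format=hec, which is documented to preserve per-event default fields such as host, source, and sourcetype instead of stamping the search head's values:

index=source_index earliest=-24h
| collect index=target_index output_format=hec

The index names and time range here are placeholders. On older versions without output_format=hec, an alternative is to copy the original host value into the event text before collecting so it remains searchable.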
We have a PowerShell script which collects data from all domain controllers, mainly about services (start/stop). The script executes every 2 hours and a CSV file is mailed to a group. We want to integrate that PowerShell script with Splunk to create a dashboard so the monitoring team can monitor it. All our DCs have Splunk installed. So I'm looking for documentation or a link that can help me get started with the dashboard, or for a different way to integrate this into Splunk.
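A minimal sketch of one integration path, using the universal forwarder's PowerShell scripted input so the script's output is indexed directly instead of being mailed. The app name, script path, index, and sourcetype below are assumptions:

# inputs.conf on each DC's universal forwarder
[powershell://DCServiceStatus]
script = . "$SplunkHome\etc\apps\dc_monitoring\bin\Get-DCServices.ps1"
schedule = 0 */2 * * *
index = windows
sourcetype = dc:services

Whatever the script emits gets indexed, so the existing CSV-producing logic can largely be reused; a dashboard is then just a search over index=windows sourcetype=dc:services.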
Hi, is there any method to get the list of all the universal forwarders that are forwarding to an indexer? Regards, Rahul
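A sketch of one common approach, using the indexer's own metrics of inbound forwarder connections; the fwdType filter value is an assumption about how universal forwarders are labeled there:

index=_internal source=*metrics.log* group=tcpin_connections fwdType=uf
| stats latest(_time) as last_seen by hostname, sourceIp, version

Running this on the indexer (or on a search head that can see its _internal index) lists each forwarder host with its last connection time.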
We have configured zScaler to send logs to a syslog server, where rsyslog intercepts the feed and writes it to a file. A HF is deployed to forward logs from the file to the indexers. The setup works fine. However, rsyslog does some funny things with the logs it receives, such as:

2021-09-1704:12:27 reason=Allowed event_id=7008750744672403548 pr
2021-09-17T14:12:52.976915+10:00 10.24.12.5 otocol=HTTP_PROXY action=Allowed transactionsize=130 responsesize=65 requestsize=65 urlcategory=Corporate Marketing serverip=52.13.15.12 clienttranstime=0 requestmethod=CONNECTrefererURL="None" useragent=Unknown product=NSS location=

As you can see, the feed is broken into two lines (log length is not causing the break). Is there an rsyslog config I can use to remediate this issue? The zScaler format we have used is below:

%d{yy}-%02d{mth}-%02d{dd}%02d{hh}:%02d{mm}:%02d{ss}\treason=%s{reason}\tevent_id=%d{recordid}\tprotocol=%s{proto}\taction=%s{action}\ttransactionsize=%d{totalsize}\tresponsesize=%d{respsize}\trequestsize=%d{reqsize}\turlcategory=%s{urlcat}\tserverip=%s{sip}\tclienttranstime=%d{ctime}\trequestmethod=%s{reqmethod}\trefererURL="%s{ereferer}"\tuseragent=%s{ua}\tproduct=NSS\tlocation=%s{location}\tClientIP=%s{cip}\tstatus=%s{respcode}\tuser=%s{login}\turl="%s{eurl}"\tvendor=Zscaler\thostname=%s{ehost}\tclientpublicIP=%s{cintip}\tthreatcategory=%s{malwarecat}\tthreatname=%s{threatname}\tfiletype=%s{filetype}\tappname=%s{appname}\tpagerisk=%d{riskscore}\tdepartment=%s{dept}\turlsupercategory=%s{urlsupercat}\tappclass=%s{appclass}\tdlpengine=%s{dlpeng}\turlclass=%s{urlclass}\tthreatclass=%s{malwareclass}\tdlpdictionaries=%s{dlpdict}\tfileclass=%s{fileclass}\tbwthrottle=%s{bwthrottle}\tservertranstime=%d{stime}\tmd5=%s{bamd5}\tcontenttype=%s{contenttype}\ttrafficredirectmethod=%s{trafficredirectmethod}\trulelabel=%s{rulelabel}\truletype=%s{ruletype}\tmobappname=%s{mobappname}\tmobappcat=%s{mobappcat}\tmobdevtype=%s{mobdevtype}\tbwclassname=%s{bwclassname}\tbwrulename=%s{bwrulename}\tthrottlereqsize=%d{throttlereqsize}\tthrottlerespsize=%d{throttlerespsize}\tdeviceappversion=%s{deviceappversion}\tdevicemodel=%s{devicemodel}\tdevicemodel=%s{devicemodel}\tdevicename=%s{devicename}\tdevicename=%s{devicename}\tdeviceostype=%s{deviceostype}\tdeviceostype=%s{deviceostype}\tdeviceosversion=%s{deviceosversion}\tdeviceplatform=%s{deviceplatform}\tclientsslcipher=%s{clientsslcipher}\tclientsslsessreuse=%s{clientsslsessreuse}\tclienttlsversion=%s{clienttlsversion}\tserversslsessreuse=%s{serversslsessreuse}\tservertranstime=%d{stime}\tsrvcertchainvalpass=%s{srvcertchainvalpass}\tsrvcertvalidationtype=%s{srvcertvalidationtype}\tsrvcertvalidityperiod=%s{srvcertvalidityperiod}\tsrvocspresult=%s{srvocspresult}\tsrvsslcipher=%s{srvsslcipher}\tsrvtlsversion=%s{srvtlsversion}\tsrvwildcardcert=%s{srvwildcardcert}\tserversslsessreuse="%s{serversslsessreuse}"\tdlpidentifier="%d{dlpidentifier}"\tdlpmd5="%s{dlpmd5}"\tepochtime="%d{epochtime}"\tfilename="%s{filename}"\tfilesubtype="%s{filesubtype}"\tmodule="%s{module}"\tproductversion="%s{productversion}"\treqdatasize="%d{reqdatasize}"\treqhdrsize="%d{reqhdrsize}"\trespdatasize="%d{respdatasize}"\tresphdrsize="%d{resphdrsize}"\trespsize="%d{respsize}"\trespversion="%s{respversion}"\ttz="%s{tz}"\n

Thanks
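A sketch of one thing to try, on the assumption that the break comes from rsyslog truncating oversized messages rather than zScaler splitting them: the NSS format above produces far more fields than fit in rsyslog's default 8K message limit, and the directive below raises it. It must appear near the top of rsyslog.conf, before any input modules are loaded.

# /etc/rsyslog.conf -- raise the cap before any module() / $ModLoad lines
$MaxMessageSize 64k

If the feed arrives over UDP this won't help, since UDP syslog is bounded by datagram size; switching the zScaler feed to TCP would then be the companion change.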
Hi, can someone help with the regex for the log entry below? I need a regex to extract the fields in red. Thanks for your help.

INFO 1 --- [nio-8080-exec-2] XXXXXXXXXXX.SLALogging : Response --> { "TestDetails" : [ { "TestIdentifiers" : { "TestIdentifier" : "xxxx", "TestBusiness" : 1 }, "borrower" : { "lastName" : "XXXXXX", "firstName" : "XXXXXX", "middleName" : "XX" }, "propertyAddress" : { "street1" : "XXXXXXXXX", "city" : "XXXXXX", "state" : "XX", "zip" : "XXXXXX", "country" : "XX" }, "TestLoanNumber" : "XXXXXXXXXX" "TestIdentifiers" : { "TestIdentifier" : "xxxx", "TestBusiness" : 1
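Since the payload after "Response --> " is JSON, a sketch of an alternative to per-field regexes is to isolate the JSON and let spath extract every field; which fields were marked in red is unknown, so the extraction below is generic:

| rex "Response --> (?<response_json>\{.+)"
| spath input=response_json

From there, individual values are addressable by path, e.g. spath output=lastName path=TestDetails{}.borrower.lastName.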
I am new to Splunk Cloud but familiar with Splunk Enterprise. I just created an app from scratch manually on Splunk Cloud, but I couldn't find a way to add a custom logo to the app. On Splunk Enterprise, I'd do this by adding a logo of specific resolutions to the static directory of the app [$SPLUNK_HOME/etc/apps/appname/static/]. How do I achieve the same on Splunk Cloud? I obviously won't have SSH access to the Splunk Cloud instance, so I'm looking for an option in the Splunk Cloud UI to add a custom logo for my app, if there is one.
I created an accelerated search that is set for 7 days' retention, runs every 30 minutes, and searches 30 minutes back when it runs. I set it up in my dashboard to be used as a base search like so:

<search id="reportBase" ref="Accelerated report base">
  <earliest>$set_time.earliest$</earliest>
  <latest>$set_time.latest$</latest>
</search>

I then attempt to use it and modify the results with tokens like so:

<search base="reportBase">
  <query>| search type IN ($types$) AND account IN ($accounts$)
| stats count by hostname
| sort -count</query>
</search>

The search modifications with tokens work. However, no matter what I do, the time picker does not work; I only ever get back the last 30 minutes of data. I thought the 7-day retention meant I could quickly get back any amount of time up to 7 days, not just the last 30 minutes. I tried to work around this by running the following, but the same thing happens:

| loadjob savedsearch="MyUser:search:Accelerated report base"

Then I tried to use the report in a normal search, and the time picker there also does nothing; it still only shows the last 30 minutes of data. Am I missing something, or can I not use accelerated reporting with a time picker?
I want to view Splunk dashboards and receive Splunk alerts on a mobile device. My Splunk Enterprise instance (version 8.2.4) address is `http://192.168.1.100:8000`. I have downloaded the `Splunk Mobile` app and installed it on my Android device, but it only lets me enter an address ending in 'splunkcloud.com'. Does it only support Splunk Cloud? Does anyone know how to log in to my Splunk Enterprise instance from Splunk Mobile, and is there a tutorial? Thank you!
Hi all, I'm changing a field name in my index, so I'm trying to set up a field alias so both the old field name and new field name can be used in queries. This is for backward compatibility reasons, since a lot of existing dashboards/reports (many I do not own) refer to this field. So I set up the field alias, and I find that the field alias works for a normal search (non-tstats), but does not work for tstats.  Does that mean field aliases do not work for tstats at all?
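For context, a sketch of the behavior usually at play, plus a hedged workaround: field aliases are search-time knowledge objects, while tstats over raw indexes reads indexed fields, so renaming after the aggregation is one way to expose the old field under the new name. The index and field names below are placeholders:

| tstats count where index=my_index by old_field
| rename old_field as new_field

Aliases defined in props.conf generally won't be applied inside tstats itself unless the alias is baked into an accelerated data model that tstats searches.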
As the title suggests, I am looking for an efficient way to consolidate multiple standalone search heads into a single search head. How can I ensure all the required search artifacts get appropriately merged into the single search head?
I'm a newbie on Splunk, trying a basic thing, but I haven't found a solution; reaching out in the hope of some direction. My query produces search results with a userid field. A lookup file (master_users) has all users, with a column named userid. I want only those userids which are in the lookup but not in my search results. I tried multiple options but didn't find the right solution.
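A minimal sketch of the usual pattern for this kind of anti-join: start from the lookup and exclude every userid the events produce, via a subsearch. The index name and time range are placeholders:

| inputlookup master_users
| search NOT [ search index=my_index earliest=-7d | stats count by userid | fields userid ]

The subsearch returns the userids seen in the events, and NOT [...] keeps only the lookup rows whose userid never appeared.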