All Topics

I have a chart that displays the Active and Inactive users for today. I would like to convert this into a timechart that shows data for the period selected in the time picker.

index=prod | stats latest(_time) as last_seen by customerId | eval status = if(last_seen > relative_time(now(), "-30d@d"),"Active","Inactive") | chart count by status | rename count as "User Count"

Please suggest how this could be approached. @FrankVl: you helped me with the original query. Could you please guide me here as well?
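A hedged, untested sketch of one way to make this time-range aware (it reuses customerId and the 30-day window from the original query; streamstats time_window needs the events in time order, hence the sort):

```spl
index=prod
| bin _time span=1d
| dedup _time customerId
| sort 0 - _time
| streamstats time_window=30d dc(customerId) as active_users
| timechart span=1d max(active_users) as "Active Users"
```

Note that an "Inactive" series per day would additionally need the total customer population (e.g. from a lookup), since customers with no events in the window produce no rows at all.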
I want to show only the first panel when a host is selected from the dropdown, and hide the other panels. How can I do this through XML, and what changes do I need to make?
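One common JS-free approach in Simple XML is the depends/rejects attributes on <panel>, driven by the dropdown's token. A minimal hypothetical sketch (token name, choices, and panel searches are placeholders):

```xml
<form>
  <fieldset>
    <input type="dropdown" token="host_tok">
      <label>Host</label>
      <choice value="host1">host1</choice>
      <choice value="host2">host2</choice>
    </input>
  </fieldset>
  <row>
    <!-- shown only while a host is selected -->
    <panel depends="$host_tok$">
      <table><search><query>index=_internal host=$host_tok$ | head 10</query></search></table>
    </panel>
  </row>
  <row>
    <!-- hidden while a host is selected -->
    <panel rejects="$host_tok$">
      <table><search><query>index=_internal | head 10</query></search></table>
    </panel>
  </row>
</form>
```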
Hi all, still learning Splunk here, and we just started ingesting FortiGate firewall logs. After a recent FortiGate update, the logs are all coming in with a timestamp of 5am. The logs come in via syslog to a HF. I have tried using

TIME_FORMAT = date=%Y-%m-%d time=%H:%M:%S
TIME_PREFIX = ^\s*<\d{3}>

which was suggested in another FortiGate ticket, without any luck. Any help is appreciated.

11/6/20 5:00:00.000 AM   <189>logver=602055878 timestamp=1604673601 tz="UTC-5:00" devname="RNHN-FW1800F" devid="FG181FTK20900192" vd="CORP" date=2020-11-06 time=09:40:01 logid="0001000014" type="traffic" subtype="local" level="notice" eventtime=1604673601539310045 tz="-0500" srcip=87.251.80.10 srcport=53887 srcintf="FairPoint_WAN_B" srcintfrole="wan" dstip=71.181.10.217 dstport=2256 dstintf="unknown0" dstintfrole="undefined" sessionid=45763314 proto=6 action="deny" policyid=0 policytype="local-in-policy" service="tcp/2256" dstcountry="United States" srccountry="Russian Federation" trandisp="noop" app="tcp/2256" duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 appcat="unscanned" crscore=5 craction=262144 crlevel="low" mastersrcmac="02:00:40:05:26:15" srcmac="02:00:40:05:26:15" srcserver=1
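In that sample, the date=/time= pair sits mid-event, so a TIME_PREFIX anchored at the start of the line never reaches it. A hedged props.conf sketch to try on the HF (the stanza name and TZ are assumptions; TZ should match the tz="-0500" in the events):

```ini
# props.conf on the heavy forwarder (first full Splunk instance in the path)
[fortigate]                       # replace with your actual sourcetype
TIME_PREFIX = \bdate=
TIME_FORMAT = %Y-%m-%d time=%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = US/Eastern                   # assumption, per tz="-0500" in the sample
```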
I have a dashboard that shows disk read/write data for a server on an area chart. I wrote the SPL below for it:

host="Server1" index="performance_data" instance=*: source="PerfmonMk:LogicalDisk" sourcetype="PerfmonMk:LogicalDisk" | eval instance = substr(instance, 1, len(instance)-1) | eval Host_Instance = 'host'."-".'instance' | timechart eval(round(avg('Avg._Disk_Queue_Length'),2)) AS "Avg. Disk Queue Length" BY Host_Instance limit=0

When I run this SPL over a week, with disk data collected at a 30 s interval, the dashboard takes 10-15 minutes to load. My instance is Splunk managed cloud, yet it still loads very slowly. Is there an issue with the SPL, or do I need some optimization technique to improve performance?
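One common first step is to give timechart an explicit span, so a week of 30-second samples is aggregated into far fewer buckets per series before rendering. A hedged variant of the same search (same fields assumed, only span added):

```spl
host="Server1" index="performance_data" instance=*: source="PerfmonMk:LogicalDisk" sourcetype="PerfmonMk:LogicalDisk"
| eval instance = substr(instance, 1, len(instance)-1)
| eval Host_Instance = 'host'."-".'instance'
| timechart span=1h eval(round(avg('Avg._Disk_Queue_Length'),2)) AS "Avg. Disk Queue Length" BY Host_Instance limit=0
```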
I need to know the last time a domain AD account "username" logged in, and from which server/machine, please.
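Assuming Windows Security logs are collected, successful logons are EventCode 4624. A hedged sketch (the index, sourcetype, and field names are assumptions and vary by environment):

```spl
index=wineventlog EventCode=4624 Account_Name="username"
| stats latest(_time) as last_logon, latest(ComputerName) as logon_host by Account_Name
| convert ctime(last_logon)
```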
Hopefully I can explain this so it makes sense. I have a form where I use a TEXT input to generate a token to search for one jobname in the data. This works perfectly for one jobname. I would like to be able to enter one or many different values. Normally I would use a multiselect, but there are over 65,000 possible jobnames. My JobName text input generates the token "tok_Job_Name"; then in the search I have | search Job_Name="$tok_Job_Name$". I would like to be able to enter one job, for example JOBAA, or many (JOBAA,JOBBB,JOBCC, etc.) and have the search return all jobs with the given jobnames. Is there a way to manipulate the token value within the search, or create a new token after some regex? My thought was that I could use a simple regex command like | rex field=$tok_Job_Name$ mode=sed "s/,/*","*/g", so the token would be formatted correctly to use a search IN. Hopefully this makes sense and someone can offer an idea. Thank you in advance for any and all help given!
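One token-manipulation approach (a sketch, not verified against this dashboard; the new token name is hypothetical): rewrite the raw comma-separated input into a quoted list with an <eval> inside the input's <change> handler, then use IN in the search:

```xml
<input type="text" token="tok_Job_Name">
  <label>Job Name(s), comma-separated</label>
  <change>
    <!-- wrap each comma-separated value in quotes: JOBAA,JOBBB -> "JOBAA","JOBBB" -->
    <eval token="tok_job_in">"\"" . replace($value$, ",", "\",\"") . "\""</eval>
  </change>
</input>
```

The search would then become | search Job_Name IN ($tok_job_in$).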
I am trying to pull data from an Oracle DB with a very high volume of data, hundreds of records per second, but the input is skipping some records intermittently. I am pulling data using the configuration below:

Query Timeout: 120
Max Rows to Retrieve: 0
Fetch Size: 10000
Execution Frequency: */10 * * * *

I have tried two inputs using two different rising columns (one a unique ID field, the other a time field), but both inputs skip the same records. However, we find no abnormalities in those records when checking from the DB. Any idea what may be the cause of the issue, or any suggestion on how to triage it?
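A frequent cause with rising-column inputs is rows that commit out of order: a transaction that commits late, with an ID or timestamp below the already-saved checkpoint, is never picked up on the next poll. One hedged way to look for this from the DB side, using Oracle's ORA_ROWSCN pseudocolumn to compare commit order against the rising column (table and column names here are placeholders):

```sql
-- Rows whose commit order (ORA_ROWSCN) lags far behind their rising-column
-- value can fall behind the input's checkpoint and be skipped.
SELECT event_id, event_time, ORA_ROWSCN
FROM my_events
ORDER BY ORA_ROWSCN DESC
FETCH FIRST 100 ROWS ONLY;
```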
Hi, I'm trying to draw a distribution histogram of the duration to complete a specific action. The search is:

index=index1 STATUS=Executed | eval EXECUTED_DATE_e = strptime(EXECUTED_DATE, "%Y-%m-%d %H:%M:%S.%1N") | eval START_DATE_e = strptime(START_DATE, "%Y-%m-%d %H:%M:%S.%1N") | eval TTR = EXECUTED_DATE_e - START_DATE_e | bin bins=100 TTR | stats count by TTR

This produces the correct bins and counts, but the order is alphanumeric, which places 1000000-2000000 directly after 100000-200000 instead of after 200000-300000. If I plot this result, the bins are in the wrong locations and I cannot clearly interpret the distribution histogram.

-100000-0 27531
0-100000 151267
100000-200000 14649
1000000-1100000 361
1100000-1200000 371
1200000-1300000 197
1300000-1400000 119
1400000-1500000 70
1500000-1600000 64
1600000-1700000 111
1700000-1800000 76
1800000-1900000 69
1900000-2000000 27
200000-300000 8390
2000000-2100000 20
2100000-2200000 22
2200000-2300000 12
2300000-2400000 10
2400000-2500000 8
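The bin labels are strings, so they sort lexicographically. One fix (a sketch on the same search, with a hypothetical helper field "lower") is to extract the numeric lower bound of each bin and sort on that:

```spl
index=index1 STATUS=Executed
| eval TTR = strptime(EXECUTED_DATE, "%Y-%m-%d %H:%M:%S.%1N") - strptime(START_DATE, "%Y-%m-%d %H:%M:%S.%1N")
| bin bins=100 TTR
| stats count by TTR
| rex field=TTR "^(?<lower>-?\d+)"
| sort 0 num(lower)
| fields - lower
```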
I have a scheduled report, and the schedule succeeds, but I always get the message "There are no results because the first scheduled run of the report has not completed." and the report never shows. The Splunk version is 7.2.3.
I'm on host "capture"; the stream server is "streamserver". I downloaded Stream from the web interface. When starting streamfwd I get:

2020-11-03 15:20:00 INFO [140374280497024] (main.cpp:1120) stream.main - streamfwd has started successfully (version 7.1.3 build 35)
2020-11-03 15:20:00 INFO [140374280497024] (main.cpp:1122) stream.main - web interface listening on port 8889
2020-11-03 15:20:05 ERROR [140374279440128] (CaptureServer.cpp:2210) stream.CaptureServer - Unable to ping server (a3b2ebe6-9466-4e36-8119-2c8ff3151d4b): /en-US/custom/splunk_app_stream/ping/ status=401
2020-11-03 15:20:10 ERROR [140374022964992] (CaptureServer.cpp:2210) stream.CaptureServer - Unable to ping server (a3b2ebe6-9466-4e36-8119-2c8ff3151d4b): /en-US/custom/splunk_app_stream/ping/ status=401
2020-11-03 15:20:11 ERROR [140374022964992] (CaptureServer.cpp:2298) stream.CaptureServer - /en-US/custom/splunk_app_stream/indexers?streamForwarderId=xxx status=401

/opt/splunkforwarder/etc/apps/Splunk_TA_stream/local/streamfwd.conf
[streamfwd]
port = 8889
ipAddr = 127.0.0.1
netflowReceiver.0.ip = xxxx
netflowReceiver.0.port = 9996
netflowReceiver.0.decoder = netflow
netflowReceiver.0.protocol = udp

cat /opt/streamfwd/local/streamfwd.conf
[streamfwd]
httpEventCollectorToken = xxxx
netflowReceiver.0.port = 9996
netflowReceiver.0.protocol = udp
netflowReceiver.0.ip = xxxx
netflowReceiver.0.decoder = netflow

cat /opt/streamfwd/local/inputs.conf
[streamfwd://streamfwd]
splunk_stream_app_location = https://streamserver:8000/en-US/custom/splunk_app_stream/
stream_forwarder_id = infra_netflow

cat /opt/splunkforwarder/etc/apps/Splunk_TA_stream/local/inputs.conf
[streamfwd://streamfwd]
splunk_stream_app_location = https://streamserver:8000/en-US/custom/splunk_app_stream/
stream_forwarder_id =
disabled = 0
index = netflow

curl -k "https://streamserver:8000/en-us/custom/splunk_app_stream/ping/"
<?xml version="1.0" encoding="UTF-8"?>
<response>
<messages>
<msg type="ERROR">Unauthorized</msg>
</messages>
</response>
Can we keep a bar chart stacked even after using the predict command? In my case, after using predict, the visualization automatically switches to non-stacked format, even when I select stacked. Can you please suggest how to keep the bar graph stacked even when I use the predict command? Thank you in advance, Renuka
I am getting this error while loading data into Splunk Enterprise; I tried uninstalling and reinstalling. It worked the first time after install, then the same issue recurs for subsequent loads. Any suggestions are highly appreciated.

HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /services/indexing/preview?output_mode=json&props.NO_BINARY_CHECK=1&input.path=factbook.csv (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Tunnel or SSL Forbidden')))
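The ProxyError suggests Splunk Web's loopback call to 127.0.0.1:8089 is being routed through a system proxy, which then refuses the tunnel. If a proxy is configured, excluding localhost may help; a hedged sketch (the proxy host/port are placeholders for whatever is actually configured):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf  (restart Splunk afterwards)
[proxyConfig]
http_proxy = http://proxy.example.com:3128
https_proxy = http://proxy.example.com:3128
no_proxy = localhost,127.0.0.1
```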
Hello, I am looking to onboard Symantec Email Security.cloud data to Splunk Cloud, but the add-on does not seem compatible/available on Splunk Cloud ( https://splunkbase.splunk.com/app/3830/ ). Could someone please advise if there is another way? I suppose using an on-prem HF for the add-on and forwarding the data to Splunk could work, although I am trying to avoid on-prem components if it is possible to onboard directly from IDM. Thanks in advance. Chaith
What is the renewal policy after 36 months for APM Pro and Peak?
Hi Splunk Community, sorry if my question is basic, but I am new to ML usage in Splunk. I saw a built-in example of a Splunk ML model that predicts the presence of malware, so my idea was to use the same model on email data with a status of "malicious" or not. I used around 2,000 malicious emails and 2,500 non-malicious emails. When I use that CSV file in Predict Categorical Fields with any ML model, such as logistic-categorical or random-forest, I get the error "No valid fields to fit or apply model to." Here I am trying to predict the status field.

SPL query: | inputlookup email_data.csv | head 5000 | fit LogisticRegression fit_intercept=true "status" from "fromAddress" "messageid" "senderIP" "senderdomain" "subject" into "example_malware"

Any help, or an example from anyone who has implemented this, would be really helpful.
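That error typically means none of the listed fields are usable as-is: LogisticRegression needs numeric or categorical features, and free-text fields such as subject must be vectorized first. A hedged sketch using MLTK's TFIDF preprocessor (field names are taken from the post; the generated feature-field prefix is an assumption and may differ by MLTK version):

```spl
| inputlookup email_data.csv
| head 5000
| fit TFIDF subject max_features=100 into tfidf_subject
| fit LogisticRegression fit_intercept=true status from subject_tfidf* into example_malware
```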
Dear Splunkers, sorry about this, but I have never done such a thing before. My Splunk is in the EU, and I have now added Palo Alto firewall logs (collected by a syslog server, with a UF pushing them to Splunk) from AUS. The timestamping is wrong. First, today's events (11/06) are indexed on the 11th of June (06/11), i.e. day and month are swapped. On top of that, they are indexed two hours ahead of the current time. The events now look like this:

11/06/2020 13:45:43.000   06-11-2020 21:45:43 User.Info 10.180.160.41 Nov 6 21:45:43 Firewall.device.name 1, ..........................................................

I'm using the Palo Alto add-on defaults for the sourcetype, with just the time zone changed to Sydney (timestamp prefix: ^(?:[^,]*,){5} ; lookahead 100). Could you please advise what I should do? (And what will happen if I have logs of the same sourcetype going to the same index but from a different timezone?) Regards, Norbert
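PAN-OS carries its timestamps as %Y/%m/%d %H:%M:%S, so a swapped day/month usually means Splunk is falling back to auto-detection on the syslog header instead of matching the add-on's prefix. A hedged props.conf sketch that pins the format and applies the Sydney zone per host, so other hosts sharing the sourcetype and index keep their own TZ (stanza names are placeholders for your actual sourcetype and firewall hostnames):

```ini
# props.conf on the HF/indexer
[pan:log]                        # or your local copy of the add-on sourcetype
TIME_PREFIX = ^(?:[^,]*,){5}
TIME_FORMAT = %Y/%m/%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25

# per-host TZ override, so mixed-timezone sources can share a sourcetype/index
[host::aus-fw*]
TZ = Australia/Sydney
```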
The full error is as follows:

Health Check: The list of indexes to be searched by default by the admin role on Splunk server "xxx" includes all non-internal indexes which might cause performance problems

Is there a way to check which alert, dashboard, or report is causing the issue? I know the specific time this notification triggers, but I do not know how much use that would be. I have gone through this article, but it doesn't really answer my question; it only shows how to shut off the alerts: https://docs.splunk.com/Documentation/ES/6.3.0/Admin/Troubleshootdefaultadminsearches Best regards,
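One way to find scheduled searches that run without an explicit index (and therefore fall back to the role's default indexes) is to mine the _audit index. A hedged sketch (the rex and field names may need adjusting for your audit event format):

```spl
index=_audit action=search info=completed savedsearch_name=*
| rex field=search "^'(?<spl>.*)'$"
| where NOT match(spl, "index\s*=")
| stats count by user, savedsearch_name
| sort - count
```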
I want to set a token value of "*" on a button/link click on the dashboard. Is it possible to achieve this without JS? @kamlesh_vaghela, any inputs please?
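A JS-free pattern is a small clickable panel whose drilldown sets the token. A sketch in Simple XML (the one-row search and token name are placeholders):

```xml
<panel>
  <table>
    <search><query>| makeresults | eval action="Show all" | fields action</query></search>
    <drilldown>
      <!-- clicking the row sets tok to "*" without any JS -->
      <set token="tok">*</set>
    </drilldown>
  </table>
</panel>
```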
Hello, editing a template in the splunk_TA_jmx web interface is practically impossible because the input field is only 30 characters wide. I would prefer to edit the contents of the jmx_templates.conf file directly, but the "content" parameter is encoded there. Do you know how to decode and then re-encode this parameter? Thanks, Christian
Hi, I would like to edit the default dashboards in Enterprise Security (Security Domains --> Access, Endpoint, Network, Identity, Security Intelligence). Please let me know what permissions are needed. Thanks, Guru