All Topics
Hello, I am having trouble with a panel staying hidden when the search above it returns no results. I would like to create a ticker of sorts that displays the result of a search: if something has happened in the last 48 hours it shows; if not, it stays hidden. I was told to try the below from a different source, but it's not quite working to hide the panel when there are no results. The search itself works, but the ticker shows at all times.

<search>
  <query>search that will return one result (a string) or no results</query>
  <earliest>-48h</earliest>
  <finalized>
    <condition match=" 'job.resultCount' != 0">
      <set token="ticker">$result.ticker$</set>
      <set token="ticker_result">$result.ticker$</set>
    </condition>
    <condition match=" 'job.resultCount' = 0">
      <unset token="ticker"></unset>
      <unset token="ticker_result"></unset>
    </condition>
  </finalized>
</search>
<row>
  <panel depends="$ticker$">
    <html>
      <style>
        #marquee { style: choices }
      </style>
      <marquee scrollamount="19" id="marquee">ALERT - $ticker_result$</marquee>
    </html>
  </panel>
</row>
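A possible fix, offered as a sketch rather than a verified answer: in Simple XML, `<finalized>` has been superseded by the `<done>` search event handler, and condition matching generally expects `==` rather than `=`. Handling the empty case first and letting the fall-through condition set the tokens might look like this (the query text is a placeholder for the poster's actual search):

```xml
<search>
  <query>search that will return one result (a string) or no results</query>
  <earliest>-48h</earliest>
  <done>
    <!-- when no results, unset the token so the depends= panel hides -->
    <condition match="'job.resultCount' == 0">
      <unset token="ticker"></unset>
      <unset token="ticker_result"></unset>
    </condition>
    <!-- fall-through: at least one result, show the panel -->
    <condition>
      <set token="ticker">true</set>
      <set token="ticker_result">$result.ticker$</set>
    </condition>
  </done>
</search>
```

The `depends="$ticker$"` panel from the original XML should then appear only when the token is set.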
Hi guys, I've been trying to configure the Splunk App for Windows Infrastructure, and for that I previously installed the add-on you see in the subject. But when I run the data check, I see that the ActiveDirectory* sourcetype that I need is not returning any events. See below: What could this be? Thanks in advance!
I stumbled upon the documentation for SPL2 for Splunk Cloud. Are there any plans for SPL2 for Splunk on-premises? https://docs.splunk.com/Documentation/SCS/current/SearchReference/DifferencesbetweenSPLandSPL2
Hello folks, I want to use the following add-on with Splunk Cloud: https://splunkbase.splunk.com/app/4109/#/details However, I can't get in touch with the developer. Does anyone have any idea whether I could load this into Splunk Cloud and whether it would work? Thanks!
Hi, I would like to know if there is some way to create a query that returns more than 10,000 results when I use the join command. In the limits.conf file, I can't set the limit higher than 10,500 results. Do you know some way to do this? As of today I have a file with more than 700,000 entries.
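A common workaround, sketched here with hypothetical index, sourcetype, and field names: `join` subsearches are capped by limits.conf, but combining both datasets in one base search and correlating with `stats` is not subject to that cap.

```spl
(index=main sourcetype=typeA) OR (index=main sourcetype=typeB)
| stats values(fieldA) as fieldA, values(fieldB) as fieldB by join_key
| where isnotnull(fieldA) AND isnotnull(fieldB)
```

The final `where` keeps only keys present in both datasets, which approximates an inner join; drop it for left-join-like behavior.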
I'll start off by saying I am newer to Splunk. I am using the following search:

index=server source="WinEvent" EventCode=1234 OR EventCode=5678
| eval locked_account_name=mvindex(Account_Name, 1)
| eval account_that_unlokcedit=mvindex(Security_ID, 0)
| transaction startswith="locked out" endswith="unlocked"
| stats sum(duration) as duration by locked_account_name account_that_unlokcedit
| eval min=duration/60
| eval min=round(min,2)
| search account_that_unlokcedit=*APP1234* OR account_that_unlokcedit=*z_xxx* OR account_that_unlokcedit=*APP56789*

I need to rename the min column to minutes, then get the average of that column (minutes) and put the average in its own column.
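One way to sketch this, appended to the end of the search above: `eventstats` computes an aggregate while keeping every row, so the average lands in its own column next to each row's value.

```spl
| eval minutes=round(duration/60,2)
| fields - min
| eventstats avg(minutes) as avg_minutes
| eval avg_minutes=round(avg_minutes,2)
```

If the average should be split per account, add a `by` clause to the `eventstats`, e.g. `eventstats avg(minutes) as avg_minutes by account_that_unlokcedit`.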
For some background on how the data is structured: it is JSON data that I have ingested in a specific way, using a regex line breaker that works best for most of the metrics I'm trying to find, so I can't split the events up differently. Within the JSON there are separate groupings of data that report on specific modules and whether they "pass" or "fail"; this is the information I'm trying to pull out. I would like to pull it into a chart to show "this many of X passed and this many of X failed". Below is a snippet of the JSON data to show what it looks like; there are over 100 of these groupings within a single JSON. I am trying to pull the "Severity" and "CurrentStatus" values to essentially mark the grouping as "Level2-Passed", and then do this for each similar grouping, and for "Level1-Passed", "Level1-Failed", and so on. I am able to get the number of "Severity" values and "CurrentStatus" values but have not been able to correlate the two together.

{
  "ID": "",
  "Title": "",
  "Rule": "",
  "Severity": "LEVEL II",
  "Version": "",
  "Description": "",
  "Location": "",
  "KeyName": "",
  "KeyType": "",
  "ExpectedValue": "",
  "OriginalValue": "",
  "CurrentValue": "",
  "Options": "",
  "Comments": "",
  "ActionTaken": "",
  "CurrentStatus": "PASSED",
  "Conflict": ""
},

I have tried using a combo of the following to split the single event into multiple events but have not found a search command that works with this to get the data into a format that will work.
| eval EventGroups=split(_raw,"},")

I have also tried a number of combinations of rex and regex search commands similar to the below, but have not been able to get them to work properly. Example:

| rex field=EventGroups "(?<LevelI>(.Severity.*?)(LEVEL I\b))"
| rex field=EventGroups "(?<LevelII>(.Severity.*?)(LEVEL II\b))"
| rex field=EventGroups "(?<LevelIII>(.Severity.*?)(LEVEL III\b))"
| rex field=EventGroups "(?<CurrentStatusPASSED>(.CurrentStatus.*?)(PASSED.))"
| rex field=EventGroups "(?<CurrentStatusFAILED>(.CurrentStatus.*?)(\SFAILED.))"

And...

| rex max_match=0 field=EventGroups "(?<SevCATI_F>(.*?.*\n.*\n.*\n.*\n.*?(\bLEVEL I\b).*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*\n.*?\S(\bFAILED\b).*\n.*\n.*?.*))"

Any help or advice with this would be greatly appreciated.
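One approach, sketched under the assumption that the JSON keys are quoted exactly as in the snippet above: expand each grouping into its own row with `mvexpand`, extract both values from the same row, and then count by their combination so Severity and CurrentStatus stay correlated.

```spl
| eval EventGroups=split(_raw,"},")
| mvexpand EventGroups
| rex field=EventGroups "\"Severity\":\s*\"(?<Severity>[^\"]+)\""
| rex field=EventGroups "\"CurrentStatus\":\s*\"(?<CurrentStatus>[^\"]+)\""
| where isnotnull(Severity) AND isnotnull(CurrentStatus)
| eval Result=Severity."-".CurrentStatus
| stats count by Result
```

Because both fields are extracted from the same expanded row, each count is a genuine pair (e.g. "LEVEL II-PASSED") rather than two independent tallies.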
I'm looking for the impersonation logs and how to configure them via the add-on. Reviewing the documentation, I don't see which table contains those logs. Has anyone else completed this?
Hi, I need help on how to set up or enable secured syslog in Splunk.
I need to allow the Splunk ES search head to access the Internet so that the Splunk ES use case / content updates can be kept up to date. Does anyone know the URL(s) and port(s) that the Splunk ES search head needs to access? The same question goes for threat intel downloads: are the URLs for the free intel feeds documented anywhere? Thank you.
How about for Service Availability? Is there support for inspecting the response header to determine the availability of a web page? In our environment, checking the response content and/or HTTP status code (200) is not enough to accurately gauge availability.

^ Edited by @Ryan.Paredez Note: This conversation was split off from this existing one: https://community.appdynamics.com/t5/NET-Agent-Installation/How-do-I-capture-HTTP-Header-responses-for-a-web-page/m-p/36566#M1053
I am writing a query to look for rises in error messages over the past hour. It looks in 15-minute chunks from 0 to 60 minutes ago. Rows where there are 0 error messages are missing from the table, but I need to keep them there so that when I run a median over the last 3 time bins, it includes the 0s. Each API has its own error messages when it fails, and not every failure occurs in every 15-minute block of time for its API. So far I have this, in run-anywhere SPL, but it's not correct:

| makeresults 1
| eval api="api1", errorMsg="msg1", Minute=0, Traffic="1234", Failures="5"
| append [| makeresults 1 | eval api="api1", errorMsg="msg2", Minute=15, Traffic="1786", Failures="2"]
| append [| makeresults 1 | eval api="api1", errorMsg="msg2", Minute=30, Traffic="1842", Failures="1"]
| append [| makeresults 1 | eval api="api1", errorMsg="msg2", Minute=45, Traffic="1619", Failures="7"]
| append [| makeresults 1 | eval api="api1", errorMsg="msg3", Minute=0, Traffic="1234", Failures="15"]
| append [| makeresults 1 | eval api="api1", errorMsg="msg3", Minute=45, Traffic="1619", Failures="12"]
| append [| makeresults 1 | eval api="api2", errorMsg="msg10", Minute=15, Traffic="7856", Failures="110"]
| fields api, errorMsg, Minute, Traffic, Failures
| appendpipe [| stats count by api, errorMsg | eval Minute=split("0,15,30,45", ",") | mvexpand apiErrorMsg | mvexpand Minute]
| stats sum(Traffic) as Traffic, sum(Failures) as Failures by api, errorMsg, Minute
| fillnull value=0 Failures

The table is too large to show, but it doesn't carry the total traffic values for each 15-minute bin. I looked at the following solutions, but they are each different enough that they only partially worked, as I have 2 group-by fields and the APIs and error messages are not known until the query runs.
https://community.splunk.com/t5/Dashboards-Visualizations/how-to-insert-row-on-zero-count-and-still-use-group-by-multiple/m-p/144172#M8722
https://community.splunk.com/t5/Splunk-Search/Any-way-to-return-zero-result-count-stats-of-a-field-such-as-the/td-p/92149
https://community.splunk.com/t5/Security/Search-events-against-a-lookup-table-and-show-matching-count/td-p/47408
https://community.splunk.com/t5/Dashboards-Visualizations/Conditionally-Append-Rows-to-Stats-Table/m-p/121235#M7040

The only problem is that the Total Traffic field (the total of all calls regardless of whether they erred) is missing from many rows. What can I do after the makeresults that will fill this table out correctly?
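One possible sketch of a fix, under two assumptions taken from the sample data: the four 15-minute bins are fixed, and Traffic is the same for every errorMsg of a given api within a bin. Inject a zero-failure row for every api/errorMsg/Minute combination, then backfill Traffic per api and Minute with `eventstats`:

```spl
| appendpipe
    [| stats count by api, errorMsg
     | eval Minute=split("0,15,30,45", ",")
     | mvexpand Minute
     | eval Minute=tonumber(Minute), Failures=0
     | fields api, errorMsg, Minute, Failures]
| stats sum(Failures) as Failures, max(Traffic) as Traffic by api, errorMsg, Minute
| eventstats max(Traffic) as Traffic by api, Minute
| fillnull value=0 Traffic, Failures
```

The `eventstats` copies the known Traffic for an api/Minute bin onto the injected rows; bins where the api genuinely saw no traffic stay at 0 via `fillnull`. The fix to the original attempt is that the subsearch multivalue field is `Minute` (the `mvexpand apiErrorMsg` referenced a field that never existed).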
Hi, I am working on a use case in which I have successfully acquired the required forecasting results. I am struggling with modifying the colors of the lines to differentiate well between the actual data and the forecasted data. Any lead on how this issue could be addressed would be very helpful. The viz type is "Splunk_ML_Toolkit.ForecastViz".
Hello all, How would I join the below results by the common field host? The same index is used. I was able to create advanced and big dashboards/searches in the past, but I cannot use join and similar commands lately, because I'm not working with Splunk daily and have forgotten almost everything. My original intention was to add BuildNumber to this search:

sourcetype="WinHostMon" Type="Disk" Name="C:" host="*" NOT host="*dc.dhl.com*" NOT host="*czchows*" NOT host="*MYKULWS*" NOT host="*czstlws*" NOT host="*usqasws*"
| dedup host
| eval FreeSpaceKB = round((FreeSpaceKB/1024/1024),2)
| eval TotalSpaceKB = round((TotalSpaceKB/1024/1024),2)
| eval percentage=(FreeSpaceKB/TotalSpaceKB*100)
| join host
    [ search sourcetype="xendesktop:7:machine"
      | eval host=MachineName ]
| table MachineName DesktopGroupName FreeSpaceKB TotalSpaceKB percentage
| rename FreeSpaceKB AS "Free Space GB" MachineName AS Machine TotalSpaceKB AS "Total Space GB" percentage AS "% Free Space"
| sort Machine
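If BuildNumber lives in the xendesktop:7:machine events (an assumption, since the field isn't shown), one sketch is to keep it in the subsearch's field list and add it to the final table; `type=left` keeps hosts that have no Xendesktop record:

```spl
| join type=left host
    [ search sourcetype="xendesktop:7:machine"
      | eval host=MachineName
      | fields host, MachineName, DesktopGroupName, BuildNumber ]
| table MachineName DesktopGroupName BuildNumber FreeSpaceKB TotalSpaceKB percentage
```

Trimming the subsearch with `fields` also keeps the join well under the subsearch result limits.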
Hi, Is there any command in Splunk that provides functionality similar to the collect command? Can someone advise on this?
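Assuming the intent is to persist search results somewhere other than a summary index, two commonly used alternatives are `mcollect` (writes results as metrics to a metric index) and `outputlookup` (writes results to a lookup file). A minimal sketch with a hypothetical lookup name:

```spl
index=main sourcetype=access_combined
| stats count by host
| outputlookup host_counts.csv append=true
```

`append=true` accumulates rows across runs, loosely mirroring how repeated `collect` calls accumulate events in a summary index.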
Hi, For a given index with a retention of 91 days configured, we find some hosts having events for the full 91 days. Some other hosts only retain 78 days, and some hosts even less than that. This is an indexer-cluster environment with 4 peers and a distributed indexes.conf. Index:

[ix_windows_security_logs]
coldPath = $SPLUNK_DB/ix_windows_security_logs/colddb
enableTsidxReduction = 0
rtRouterThreads =
suspendHotRollByDeleteQuery = 0
enableOnlineBucketRepair = 1
frozenTimePeriodInSecs = 7862400
archiver.enableDataArchive = 0
bucketRebuildMemoryHint = 0
minHotIdleSecsBeforeForceRoll = 0
syncMeta = 1
maxTotalDataSizeMB = 716800
compressRawdata = 1
selfStorageThreads =
homePath = $SPLUNK_DB/ix_windows_security_logs/db
rtRouterQueueSize =
thawedPath = $SPLUNK_DB/ix_windows_security_logs/thaweddb
tsidxWritingLevel =

The index is not full at all:

ix_windows_security_logs — Current Size: 1 MB, Max Size: 700 GB

Nor is the device:

/dev/mapper/vgappl-lvsplunk 3.1T 51G 2.9T 2% /opt/splunk

What could be the issue here? Thank you.
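One way to investigate, sketched here: retention is enforced per bucket, not per host — a bucket only freezes once its newest event is older than frozenTimePeriodInSecs — so per-host gaps often point to ingestion gaps or timestamping issues rather than retention. `dbinspect` shows the actual time span of each bucket on each peer:

```spl
| dbinspect index=ix_windows_security_logs
| stats min(startEpoch) as earliest, max(endEpoch) as latest by splunk_server, state
| eval earliest=strftime(earliest, "%F %T"), latest=strftime(latest, "%F %T")
```

If the buckets cover the full 91 days but a host's events stop at 78, the data for that host likely never arrived (or arrived with wrong timestamps) rather than being deleted.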
I've been on the struggle bus with WinEventLog blacklist entries this week and stumbled upon the new xmlRegex modifier. Does anyone know which version of the Splunk Universal Forwarder introduced this capability? Note: the Splunk docs surrounding advanced white/blacklisting of WinEventLog inputs have improved significantly!
Creating the generic S3 connection, I get this in the logs:

"/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_common.py", line 63, in _create_s3_connection
    if isinstance(conn.host, unicode):
AttributeError: 'NoneType' object has no attribute 'host'

Any idea why I am getting this? Here's my inputs:

[aws_s3://somename_S3]
start_by_shell = false
aws_account = XXXXXXXXXXXXXX
sourcetype = aws:s3
initial_scan_datetime = default
max_items = 100000
max_retries = 3
polling_interval =
interval = 30
recursion_depth = -1
character_set = auto
is_secure = True
host_name = s3-website-us-gov-west-1.amazonaws.com
index = aws_s3
aws_iam_role = XXXXXXXXXXXXXX
bucket_name = XXXXXXXXXXXXXX
The Lookup File Editor is a nice app for end users, but it is now becoming a management headache for Splunk admins. The backup folder of the lookup editor is now almost 5 GB, and SHC replication/bundle pushes are taking forever. Any ideas on how to either - ensure only one previous copy is kept, or - delete backups every x months from the lookup editor backup folder?
Our heavy forwarder runs on Windows Server 2016. We currently have Splunk Enterprise 7.3. We installed Splunk Connect for Zoom version 1.0.1. We configured everything based on the documentation but see that our heavy forwarder does not listen on port 4443. We have a test heavy forwarder running under Linux, and there it is working. The netstat command shows that process 20318 is associated with the listening port 4443; this process is zoom_input.py. See the detailed ps entry below:

root 20318 0.0 0.1 244900 25492 ? Sl Aug25 16:07 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_Connect_zoom/bin/zoom_input.py

Now our question is what we need to do so that this also works under Windows. I do not see anything saying the app is unsupported on Windows heavy forwarders. Many thanks. Regards, Joël