All Posts

Ah sorry - I should have mentioned that the indexes for the panels, based on "Servername", have no relation to the AppID, so I can't query the panels by the AppID token. The AppID index searches a bunch of logs that have a field for AppID plus two fields (host and node), which I join with an eval to build the "Servername". That relates to a field in all of the Servername index logs, giving me a one-way relation from an AppID to the server(s) it is running on. Which is silly, but those are the logs I'm dealing with, and hence the problem with a wildcard "*" selection on the second dropdown: it just returns any Servername, not the ones filtered by AppID, i.e. by its own dropdown query. That's why I'm wondering: since the second dropdown already creates a list of the Servername(s) related to a specific AppID, how can I have all of those Servername(s) passed as tokens into the search query for each panel - not just the single Servername from selecting one option in the second dropdown, but the whole list of dropdown options, i.e. a dropdown option for "All"?
Firstly, I would suggest your search for the second dropdown change slightly to

index="syslogs" sourcetype="logs:servers:inventory" AppID=$AppID|s$ | eval Servername = host."\\".InstanceName | fields Servername | dedup Servername | sort Servername

which will be slightly more efficient. You should add the wildcard option in the second dropdown, but in your panel searches you also need to include AppID=$AppID|s$ as part of the search, so the * for Servername will also be restricted to the servers in your chosen AppID.
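As a rough sketch, the second dropdown could look something like this in Simple XML (field names such as AppID, host, and InstanceName follow the searches in this thread; adjust to your actual data):

```xml
<input type="dropdown" token="Servername" searchWhenChanged="true">
  <label>Servername</label>
  <!-- Static "All" choice: a wildcard that panel searches still constrain by AppID -->
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>Servername</fieldForLabel>
  <fieldForValue>Servername</fieldForValue>
  <search>
    <query>index="syslogs" sourcetype="logs:servers:inventory" AppID=$AppID|s$
| eval Servername = host."\\".InstanceName
| fields Servername | dedup Servername | sort Servername</query>
  </search>
</input>
```

Because each panel search also includes AppID=$AppID|s$, selecting All (Servername="*") only matches the servers belonging to the chosen AppID.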
After a timechart split by a field, you cannot use that field name after the timechart because it no longer exists; the column names are now the values of your 'series' field. You need to use the foreach method from your initial post.

The reason why index=aws returns nothing is that the data you are searching for does not exist in that index. What makes you think it did? It is in the _internal index, so you should definitely include that.

Change your earliest/latest settings to define the time period you want to search for, or use the time picker instead and remove earliest and latest entirely.
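For example, a foreach after a split-by timechart might look like this (the metrics.log source and field names follow the search earlier in the thread; the /1024/1024 conversion is illustrative):

```
source=*metrics.log group=per_index_thruput
| timechart span=1d sum(kb) as Usage by series
| foreach * [ eval <<FIELD>> = round('<<FIELD>>' / 1024 / 1024, 3) ]
```

After the timechart there is no field called Usage any more; each value of series has become its own column, which is why foreach with the <<FIELD>> wildcard is needed to apply the rounding to every column.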
These settings can be applied on a universal forwarder.
Is it possible to attach two DUO consoles to the Splunk API? We have a standard console and are soft-migrating to DUO Federal, and would like visibility/ingestion for both in Splunk. We only see the option to edit the existing DUO Splunk connector, not to add a second one. Thank you!
Bumping this thread. I'd like a solution to this post too. Below is the Simple XML code I have used.

<table>
  <search></search>
  <format type="color" field="Health">
    <colorPalette type="map">{"Critical":#6A5C9E, "Abnormal":#6A5C9E, "Normal":#65A637}</colorPalette>
  </format>
</table>

See output in the image below (dashboard left, PDF right).
Splunk Cloud Version: 9.0.2303.201
Experience: Classic
Links to Splunk Cloud docs:
https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/TableFormatsXML
https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/DashboardPDFs
Only one of my indexers is having this issue during the upgrade. Firstly, I used a domain account to install, and then encountered this error:

Splunk Enterprise Setup Wizard ended prematurely because of an error. Your system has not been modified. To install this program at a later time, run Setup Wizard again. Click the Finish button to exit the Setup Wizard.

Setup cannot copy the following files: Splknetdrv.sys SplunkMonitorNoHandleDrv.sys SplunkDrv.sys

I then referred to this KB: https://community.splunk.com/t5/Installation/Why-does-Splunk-upgrade-to-version-9-1-0-1-end-prematurely/m-p/652791 and followed this step:

Solution: Install Splunk from the command line and use the LAUNCHSPLUNK=0 flag to keep Splunk Enterprise from starting after installation has completed. For example: PS C:\temp> msiexec.exe /i splunk-9.0.4-de405f4a7979-x64-release.msi LAUNCHSPLUNK=0. You can complete the installation, and before running Splunk, you need to grant the user "Full Control" permissions to the Splunk Enterprise installation directory and all of its subdirectories.

Splunk upgraded successfully to 9.1.2 but is not able to start. I changed to a local admin and tried a repair, but still hit the same error: Setup cannot copy the following files: Splknetdrv.sys SplunkMonitorNoHandleDrv.sys SplunkDrv.sys. The Splunk services are still unable to start, and there are no apparent errors in the logs. Can anyone provide assistance with this issue?
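For the "Full Control" step, one way to grant the permissions from an elevated prompt is with icacls (the install path and account name here are assumptions; substitute your actual install directory and the account the Splunk service runs as):

```
icacls "C:\Program Files\Splunk" /grant "DOMAIN\splunksvc:(OI)(CI)F" /T
```

Here (OI)(CI)F grants Full Control with object and container inheritance for new children, and /T applies the change to all existing subdirectories and files.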
Hello, can you tell us what steps you took to reduce the risk score?
I have a simple dashboard filtering logs by AppIDs and related Servernames.

The first dropdown defaults to "*" for all AppIDs, but its search obtains all AppIDs, which you can select, and it has a token for $AppID$, e.g.

index="applogs" sourcetype="logs:apps:inventory" | table AppID | dedup AppID | sort AppID

The second dropdown searches by the $AppID$ token of the first dropdown, to get the list of Servernames returned for the selected AppID, e.g.

"$AppID$" index="syslogs" sourcetype="logs:servers:inventory" | eval Servername = host."\\".InstanceName | table AppID Servername | dedup Servername | sort Servername

This has a token for $Servername|s$ (to escape characters in the server name), which gets added to a bunch of search panels. For example, selecting App49 in the first dropdown returns ServerA, ServerB, ServerC, ServerD in the second dropdown. Selecting ServerA, B, C or D in the second dropdown then filters a bunch of panel searches by that Servername token.

That's all working fine, but by default I want the option to search all panels by all $Servername$ options in the second dropdown related to the selected AppID. Adding a "*" wildcard option to the second dropdown, as in the first, just returns all Servernames, not the ones filtered by the $AppID$ token. How can I default my second dropdown to an "All" option that does this, i.e. searches all panels by all the results that get populated in the second dropdown from the $AppID$ of the first?
Yes, I can see the output in the column from the below search:

source=*metrics.log group=per_index_thruput earliest=-1w@d latest=-0d@d | timechart span=1d sum(kb) as Usage by series | eval Usage = round(Usage /1024/1024, 3)

How do I convert the column into a GB value? Also, when I filter on the last 30 days I only see the last 7 days instead of 30. How do I fix this?

Note - when I specify the index, for example index=aws, I am not getting any results from the search query. Thanks!
I also want to find out whether anyone was able to integrate DB Connect with CyberArk or HashiCorp.
Thank you so much. This is really awesome .
So that worked! However, it created a new problem: it runs for a very long time. I was talking to another engineer, and he suggested doing an outputlookup just for the small _ad index, then using that data to match, just like in this example by @ITWhisperer. But I'm not able to combine them and get the output. I'm thinking it's because I need to match on two columns instead of a single one, as below?

search index | table host | dedup host | append [ | inputlookup lookupfile | table host | dedup host ] | stats count by host | eval match=if(count=1, "missing", "ok")
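If the match has to be on two fields (say host and node, as mentioned earlier in the thread; the index and lookup names here are placeholders for your own), the same pattern extends by keeping both fields in the stats by clause:

```
index=_ad | table host node | dedup host node
| append [ | inputlookup lookupfile | table host node | dedup host node ]
| stats count by host node
| eval match=if(count=1, "missing", "ok")
```

A count of 1 means that host/node pair appears on only one side (the index or the lookup), i.e. it is missing from the other; a count of 2 means it is present in both.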
You can use the license ingest log data for that also:

index=_internal source=/opt/splunk/var/log/splunk/license_usage.log idx=* type=Usage | timechart span=1d sum(b) as bytes by idx | foreach * [ eval <<FIELD>>=round(<<FIELD>> / 1024 / 1024, 3) ]

Round/divide as needed to get the appropriate size unit.
Hi, we are ingesting data into Splunk Cloud into the below index:

index=zscaler source=firewall

Is there a way we can forward this (from Splunk Cloud) to Trend Micro's HTTPS API or to a TCP stream? Thanks in advance for any help!
@vennemp @Anurag_Byakod I had this same problem, and it stemmed from certificate file formatting. I ended up running openssl x509 -in idpCert.pem -out idpCert1.pem, pointed the SAML config at idpCert1.pem, reloaded auth, logged out of the admin account, and was logged right in. After diff-ing the two, it seems that when copying and pasting the cert info from Okta (into cat, in my case), the formatting isn't maintained. And while running openssl x509 -in idpCert.pem -noout -text _will_ give you the correct output for the cert, something in Splunk barfs when it sees the bad formatting. Outputting it to a new pem using openssl formats it correctly. Good luck.
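The re-wrap step can be reproduced end to end like this (using a throwaway self-signed cert as a stand-in for the pasted Okta IdP cert; the /tmp paths and CN are illustrative):

```shell
# create a throwaway self-signed cert to stand in for the pasted IdP cert
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=idp-test" \
  -keyout /tmp/idp.key -out /tmp/idpCert.pem

# re-emit the certificate; openssl rewrites it with canonical PEM line wrapping
openssl x509 -in /tmp/idpCert.pem -out /tmp/idpCert1.pem

# sanity-check the cleaned copy before pointing the SAML config at it
openssl x509 -in /tmp/idpCert1.pem -noout -subject
```

The cleaned idpCert1.pem is byte-for-byte valid PEM even if the pasted original had mangled line breaks, which is why pointing Splunk at the re-emitted file resolves the auth failure.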
It SHOULD have access - I don't see any errors or anything. The only thing that comes up is "Parsing configuration stanza: monitor://c:\users\*\appdata\local\apps\*\app.log." but no errors...
Sorry, I really don't understand.
I think you have the right idea on all counts. Migrating the CM is similar to migrating a SH. Do migrate the CM before the indexers.
You have not answered fundamental questions about your dataset; see my comment. BTW, once you use a group-by, a single aggregation function will no longer result in a field name corresponding to your AS clause. This is why an operation on Usage will not do anything. (Multiple aggregation functions will result in composite field names; again, an operation on Usage will not do anything.)