All Posts



Bumping this thread. I'd like a solution to this post too. Below is the Simple XML code I have used:

<table>
  <search></search>
  <format type="color" field="Health">
    <colorPalette type="map">{"Critical":#6A5C9E, "Abnormal":#6A5C9E, "Normal":#65A637}</colorPalette>
  </format>
</table>

See the output in the image below (dashboard on the left, PDF on the right).

Splunk Cloud version: 9.0.2303.201
Experience: Classic

Links to Splunk Cloud docs:
https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/TableFormatsXML
https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/DashboardPDFs
Only one of my indexers is having this issue during upgrade. First, I used a domain account to install, and then encountered this error:

Splunk Enterprise Setup Wizard ended prematurely because of an error. Your system has not been modified. To install this program at a later time, run Setup Wizard again. Click the Finish button to exit the Setup Wizard.

Setup cannot copy the following files: Splknetdrv.sys SplunkMonitorNoHandleDrv.sys SplunkDrv.sys

I then referred to this KB: https://community.splunk.com/t5/Installation/Why-does-Splunk-upgrade-to-version-9-1-0-1-end-prematurely/m-p/652791 and followed its steps:

Solution: Install Splunk from the command line and use the LAUNCHSPLUNK=0 flag to keep Splunk Enterprise from starting after installation has completed. For example:

PS C:\temp> msiexec.exe /i splunk-9.0.4-de405f4a7979-x64-release.msi LAUNCHSPLUNK=0

You can complete the installation; before running Splunk, you need to grant the user "Full Control" permissions on the Splunk Enterprise installation directory and all of its subdirectories.

Splunk upgraded successfully to 9.1.2 but is not able to start. I changed to a local admin account and tried a repair, but still hit the same error: Setup cannot copy the following files: Splknetdrv.sys SplunkMonitorNoHandleDrv.sys SplunkDrv.sys

The Splunk services are still unable to start, and there are no apparent errors in the logs. Can anyone provide assistance with this issue?
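For reference, the permission step from that KB can be done from an elevated command prompt. This is only a sketch: the installation path assumes the default of C:\Program Files\Splunk, and DOMAIN\splunksvc is a hypothetical service account name - substitute your own.

```
:: Grant the Splunk service account Full Control, inherited by all subfolders and files
icacls "C:\Program Files\Splunk" /grant "DOMAIN\splunksvc:(OI)(CI)F" /T
```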
Hello, can you tell us what steps you took to reduce the risk score?
I have a simple dashboard filtering logs by AppIDs and related Servernames.

The first dropdown defaults to "*" for all AppIDs, but its search returns every AppID so you can select one; it sets a token $AppID$, e.g.:

index="applogs" sourcetype="logs:apps:inventory" | table AppID | dedup AppID | sort AppID

The second dropdown searches by the $AppID$ token from the first dropdown, to get the list of Servernames returned for the selected AppID, e.g.:

$AppID$ index="syslogs" sourcetype="logs:servers:inventory" | eval Servername = host."\\".InstanceName | table AppID Servername | dedup Servername | sort Servername

This sets a token $Servername|s$ (to escape characters in the server name), which is used by a bunch of search panels.

For example, selecting App49 in the first dropdown returns ServerA, ServerB, ServerC, ServerD in the second dropdown. Selecting ServerA, B, C, or D in the second dropdown then filters the panels by that Servername token.

That's all working fine, but by default I want the option to search all panels by all $Servername$ options in the second dropdown related to the selected AppID. Adding a "*" wildcard option to the second dropdown, as in the first, just returns all Servernames, not the ones filtered by the $AppID$ token. How can I default my second dropdown to an "All" option that does this, i.e. searches all panels by all of the results that get populated in the second dropdown from the $AppID$ of the first?
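One common approach is to keep a static "All" choice (value "*") in the second dropdown, but also include the $AppID$ token in every panel search, so the wildcard is still constrained to servers belonging to the selected app. A minimal sketch of the second input, reusing the index, sourcetype, and token names from the post (the surrounding fieldset element is assumed):

```xml
<input type="dropdown" token="Servername">
  <label>Server</label>
  <choice value="*">All</choice>
  <default>*</default>
  <search>
    <query>index="syslogs" sourcetype="logs:servers:inventory" AppID=$AppID|s$
      | eval Servername = host."\\".InstanceName
      | dedup Servername | sort Servername | table Servername</query>
  </search>
  <fieldForLabel>Servername</fieldForLabel>
  <fieldForValue>Servername</fieldForValue>
</input>
```

Each panel search would then filter on both tokens, e.g. `... AppID=$AppID|s$ Servername=$Servername|s$`, so "All" expands to every server for that AppID rather than every server overall.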
Yes, I can see the output in the column from the search below:

source=*metrics.log group=per_index_thruput earliest=-1w@d latest=-0d@d | timechart span=1d sum(kb) as Usage by series | eval Usage = round(Usage /1024/1024, 3)

How do I convert the column into a GB value? Also, when I filter on the last 30 days I am only able to see the last 7 days instead of 30. How do I fix this?

Note - when I specify an index, for example index=aws, I am not getting any search results from the query. Why?

Thanks
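Two things worth checking in the search above: the inline earliest=-1w@d latest=-0d@d overrides the time-range picker, which would explain why "Last 30 days" still shows only a week, and metrics.log events live in the _internal index, so filtering with index=aws matches nothing - the per-index name is carried in the series field. A sketch of both fixes (assumptions, not verified against your environment; note sum(kb) is in KB, so dividing by 1024 twice yields GB):

```
index=_internal source=*metrics.log group=per_index_thruput series="aws"
| timechart span=1d sum(kb) as Usage
| eval Usage_GB = round(Usage / 1024 / 1024, 3)
```

With the inline earliest/latest removed, the dashboard or search-bar time picker controls the range.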
I also want to find out whether anyone has been able to integrate DB Connect with CyberArk or HashiCorp?
Thank you so much. This is really awesome.
So that worked! However, it created a new problem: it runs for a very long time. I was talking to another engineer and he suggested doing an outputlookup just for the small _ad index, then using that data to match, just like in this example by @ITWhisperer. But I'm not able to combine them and get the output. I'm thinking it's because I need to match on 2 columns instead of a single one, as below?

search index | table host | dedup host | append [ | inputlookup lookupfile | table host | dedup host ] | stats count by host | eval match=if(count=1, "missing", "ok")
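The same pattern extends to two columns by listing both fields in table, dedup, and stats. A sketch, assuming the second matching column is named user (hypothetical - substitute your actual field name) and reusing the lookupfile name from the post:

```
index=_ad
| table host user
| dedup host user
| append [| inputlookup lookupfile | table host user | dedup host user ]
| stats count by host user
| eval match=if(count=1, "missing", "ok")
```

A host/user pair appearing in only one of the two sources gets count=1 and is flagged "missing"; pairs present in both get count=2 and are "ok".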
You can also use the license ingest log data for that:

index=_internal source=/opt/splunk/var/log/splunk/license_usage.log idx=* type=Usage | timechart span=1d sum(b) as bytes by idx | foreach * [ eval <<FIELD>>=round(<<FIELD>> / 1024 / 1024, 3) ]

Round/divide as needed to get the appropriate size unit.
Hi, we are ingesting data into Splunk Cloud, into the index below:
-------------
index=zscaler source=firewall
-------------
Is there a way we can forward this (from Splunk Cloud) to Trend Micro's HTTPS API or a TCP stream?

Thanks in advance for any help!
@vennemp @Anurag_Byakod I had this same problem, and it stemmed from certificate file formatting. I ended up running:

openssl x509 -in idpCert.pem -out idpCert1.pem

I pointed the SAML config at idpCert1.pem, reloaded auth, logged out of the admin account, and was logged right in. After diff-ing the two files, it seems that copying and pasting the cert info from Okta (into cat, in my case) doesn't maintain the formatting. And while running openssl x509 -in idpCert.pem -noout -text _will_ give you the correct output for the cert, something in Splunk barfs when it sees the bad formatting. Outputting it to a new pem using openssl formats it correctly. Good luck
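To illustrate the normalization on a throwaway self-signed certificate (file paths and the CN are arbitrary, standing in for the real IdP cert):

```shell
# Create a scratch self-signed cert standing in for the Okta IdP cert
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=idp-test" \
  -keyout /tmp/idp.key -out /tmp/idpCert.pem 2>/dev/null

# Re-emit it; openssl rewrites the PEM with canonical headers and line wrapping
openssl x509 -in /tmp/idpCert.pem -out /tmp/idpCert1.pem

# The normalized copy still parses and carries the same subject
openssl x509 -in /tmp/idpCert1.pem -noout -subject
```

The same re-emit step applied to a pasted-in cert with mangled line breaks produces a file Splunk will accept.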
It SHOULD have access - I don't see any errors or anything. The only thing that comes up is "Parsing configuration stanza: monitor://c:\users\*\appdata\local\apps\*\app.log." but no errors...
Sorry, I really don't understand.
I think you have the right idea on all counts. Migrating the CM is similar to migrating a SH. Do migrate the CM before the indexers.
You have not answered fundamental questions about your dataset. See my comment. BTW, once you use groupby, a single aggregation function will no longer result in a field name corresponding to your AS clause. This is why an operation on Usage will not do anything. (Multiple aggregation functions will result in composite field names. Again, an operation on Usage will not do anything.)
Could you explain what's wrong with the original search? What is expected and what are the actual results? Importantly, what is the logic in your original search to meet your expectation?

If I have to read your mind based on the code snippet, you are saying that:

1. the main search should give you searches that have NOT produced notables (question: why are you searching for action.notable=1, not action.notable=0?);
2. the subsearch should give you searches that have produced notables (note: nobody in this forum except yourself knows what the dataset looks like, so always explain dataset and logic);
3. the difference between 1 and 2 would give you something?

If I set aside whether action.notable should be 1 or 0, i.e., assuming that has_triggered_notables = "false" is the correct label for the main search, it should have zero overlap with the subsearch, which you labeled has_triggered_notables = "true". This means an outer join should give you everything in the main search. Is this what you see? Why would you expect anything different? Again, nobody in the forum except yourself has that answer.

Maybe action.notable is not something that indicates whether a notable is produced? Maybe this field doesn't even exist? You used the phrase "status enabled" to describe your criteria, but saved searches have no "enabled" or "not enabled" status. Do you mean scheduled, as discernible from the is_scheduled field, nothing to do with the nonexistent action.notable?

If you ask an unanswerable question, no one is able to give you an answer, and this one is full of the hallmarks of unanswerable questions. Before I give up, let me make a final wild guess: by "enabled" you mean is_scheduled=1, there is nothing about action.notable, and the subsearch actually does something as I speculated in 2 above. In that case, this is a search you can try and tweak that doesn't involve an inefficient join.
| rest /services/saved/searches | search title="*Rule" is_scheduled=1 NOT [search index=notable search_name="*Rule" orig_action_name=notable | stats values(search_name) as title] | fields title
Verify Splunk has read access to the file. Check splunkd.log for messages about reading the file.
I will try this out. Thanks!
Thanks! I just tried it - it doesn't SEEM to be working; I'm not getting any data in Splunk even though I know the files are being updated. Looking at the index (just searching index=someapp) returns no data (the index does exist). This is what I have:

[monitor://c:\users\*\appdata\local\someapp\apps\*\app.log]
index = someapp
sourcetype = someapp
disabled = 0
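If it helps with troubleshooting, the effective configuration and the file-tailing status can be checked from the Splunk CLI on that host (run from $SPLUNK_HOME\bin; output shown is whatever your instance reports, not reproduced here):

```
splunk btool inputs list monitor --debug
splunk list inputstatus
```

The first confirms which inputs.conf file the monitor stanza is actually loaded from; the second shows, per monitored file, whether Splunk has opened it and how far it has read.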
Yes, I tried; the outcome is blank. Question - do I need to select a time frame like last 7 days or 30 days?