All Posts

We are currently looking to migrate our standalone Splunk Enterprise server from its existing EC2 instance to a new one. The new server will be based on the AWS Marketplace AMI for version 9.0.8: https://aws.amazon.com/marketplace/pp/prodview-l6oos72bsyaks?sr=0-1&ref_=beagle&applicationId=AWSMPContessa

Reason for migration: we want to use the Marketplace AMI while also upgrading the Splunk version. Since this migration involves going from version 8 to version 9, simply copying over the apps (they contain the indexes) has not given us the result we wanted in our dev/test environment. We end up with no search UI loaded when the search app is copied from the old version 8 server to the new version 9 server.

Has anyone else migrated their server this way, i.e. jumping versions while moving to a new server? What would the community recommend for the scenario we currently have? Would an in-place upgrade to version 9, followed by a copy over to the new server, be a better or recommended option?
Hello, I would like to know the most suitable Splunk app for Linux auditd events. I have Linux devices such as management servers, DNS and HTTP servers, firewalls, etc. These logs are carried by both a syslog forwarder and heavy forwarders. Please suggest which Splunk app I should use to monitor the audit logs. Thanks a bunch.
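In case it is useful while you evaluate apps, here is a minimal inputs.conf sketch for monitoring the audit log with a forwarder; the monitor path, sourcetype, and index below are assumptions for illustration, not requirements of any particular app, so adjust them to match whatever add-on you settle on.

# Hypothetical inputs.conf stanza on the Linux forwarder.
# Path, sourcetype, and index are placeholders - align them with the add-on you choose.
[monitor:///var/log/audit/audit.log]
sourcetype = linux_audit
index = linux_os
disabled = 0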
I created the indexes through the Splunk Cloud web interface and they were not specific to any app. Since I created them myself, I think we can rule out the possibility of them being system-defined.
Hi All, I have searched the community threads for posts similar to this, but none have quite addressed the issue I am seeing. I have Splunk Cloud 9.1.2 and would like to retrieve logging from Snowflake. Following this Snowflake integration blog, I have installed the Splunk DB Connect app (3.15.0) and the Splunk DBX Add-on for Snowflake JDBC (1.2.1), using the self-service app install process on the Victoria experience.

Creating the identity in Splunk (matching the user created in Snowflake) works fine; however, creating the connection fails to validate (after trying for approximately 4-5 minutes) and gives me a nondescript error.

Connection string:

jdbc:snowflake://<account>.<region>.snowflakecomputing.com/?user=SPLUNK_USER&db=SNOWFLAKE&role=SPLUNK_ROLE&application=SPLUNK

In the logs I can see slightly more:

2024-02-26 00:59:26.501 +0000 [dw-868 - GET /api/connections/SnowflakeUser/status] INFO com.splunk.dbx.connector.logger.AuditLogger - operation=validation connection_name=SnowflakeUser stanza_name= state=error sql='SELECT current_date()' message='JDBC driver: query has been canceled by user.'

This appears to hit some sort of timeout in the JDBC driver. The other thing I can see is that stanza_name is blank in this result, even though the default Snowflake stanza in the DB Connect app matches the stanza created in the Snowflake blog post. Any troubleshooting help would be much appreciated.
A unique feature of Ingest Actions is that they apply to cooked data.  That means they can be installed on indexers and still work on events that come from an HF.
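For anyone curious what that looks like on disk, here is a rough sketch of the kind of props.conf/transforms.conf pair an Ingest Actions ruleset generates; the ruleset and transform names, the sourcetype, and the regex are made up for illustration, so treat this as the general shape rather than exact generated output.

# props.conf - a RULESET applies to both parsed and already-cooked data,
# which is why it still works on events arriving from a HF.
[my_sourcetype]
RULESET-drop_debug = drop_debug_events

# transforms.conf - hypothetical filter that routes matching events to the null queue.
[drop_debug_events]
REGEX = level=DEBUG
DEST_KEY = queue
FORMAT = nullQueue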
Do you need to deploy to indexers as well when there is an intermediary HF? All my config is on a HF.
If you post your existing XML it would be helpful, but I am assuming you have something like

<drilldown>
  <set token="token_icid">$row.icid$</set>
</drilldown>

There are a number of ways to do what you want, but one way is to build an additional constraint for icid that is either empty or the icid check, since the rest of the search is the same.

<drilldown>
  <set token="token_icid">$row.icid$</set>
  <eval token="token_query">if($row.icid$=0, "icid=\"".$row.icid$."\" OR ", "")</eval>
</drilldown>

Then your search can be

index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND ($token_query$ mid="$token_mid$" OR "MID $token_mid$")

so you just add $token_query$, which is either empty or the additional icid constraint.
Your example is a little unclear, because it first states that index=other has i-abcdef1234567, but the next statement says i-abcdef1234567 is filtered out because it was NOT in index=other. Hopefully the following example demonstrates the principle. I am using makeresults to simulate your data set. The stats values combines the two indexes, and the where clause is where you apply your exclusion logic. If that is not correct based on the discrepancy above, adjust as necessary. You can remove the where clause to see what the data looks like first.

| makeresults
| eval index="main", ResourceId=split("i-1234567abcdef,i-abcdef1234567,sg-12345abcde,abc", ",")
| mvexpand ResourceId
| append [
    | makeresults
    ``` and the index=other search returns InstanceId: i-abcdef1234567 ```
    | eval index="other", InstanceId=split("i-abcdef1234567,i-abcdef1234569",",")
]
| fields - _time
``` The above is just simulating your data setup ```
| eval ResourceId=coalesce(ResourceId, InstanceId)
| stats values(index) as index dc(index) as indexes by ResourceId
| where (indexes=1 AND index="main") OR indexes=2
``` I need the results to be (filtered out i-1234567abcdef because it was not returned by index=other): i-abcdef1234567 sg-12345abcde ```
It's common for knowledge objects in Splunk Cloud to be undeletable if they are defined in the default directory of an app.  In that case, you must edit the app and re-upload it. Some indexes cannot be deleted because they are system-defined.
Thank you for your response. I will check it out 
Before volunteers can help you achieve something, you need to explain what it is that you are trying to achieve, without SPL (or ChatGPT).

What do you mean by "in a drilldown"? You can have a drilldown only when you have an initial search (in a dashboard panel). What does the output of that search look like? Your code snippets suggest that you want to set tokens from that output. Is this correct? Which column from the initial search is designated to populate which token?

What do you mean by "2 possible queries" when you "have a (aka ONE) drilldown"? Do you mean you have two other panels on the same dashboard that could use the token(s) populated by this drilldown?

Again, taking SPL away, can you illustrate some data from the initial panel (anonymize as needed), then illustrate (aka tabulate) the end state of the two panels you wish to alter with this drilldown, and explain how the data is related to the end state (without SPL)? If any SPL is "not working", you need to explain/illustrate the data, describe/illustrate the actual output, illustrate the expected output, and explain why it is reasonable to arrive at that expected output. Sometimes you also need to explain how the two outputs differ, if it is not painfully obvious.
Sorry, I made a typo in the search-time commands that get me what I need; it was supposed to say:

| eval CommandHistory = commandHistory_sed

I can make the effect happen at search time; the issue is I need to figure out how to apply it at ingest time, so the effect is automatically applied to all of the events.
The rex command needs the name of an existing field in the field option. Try this

| eval commandHistory = CommandHistory
| rex field=commandHistory mode=sed "s/\¶/\n/g"
Hello, I'm currently attempting to make a CommandHistory field a bit more readable for our analysts, but I'm having trouble getting the formatting correct, or maybe I'm just using the wrong command or taking the wrong approach. Basically, our EDR dumps the recent commands run on a system into the CommandHistory field, separated by a ¶ symbol. I'm trying to replace that with a newline at ingestion time.

Made-up example of what's in CommandHistory at the moment (I don't want to use real data, I apologize):

command1 -q lifeishard¶ReallyLong Command -t LifeIsHarderWhenYouCantFigureItOut¶ThirdCommand -u switchesare -cool¶One more command

The search-time commands that get me what I want in a field called commandHistory_sed:

| eval commandHistory = CommandHistory
| rex field=commandHistory_sed mode=sed "s/\¶/\n/g"

This ends up looking like this:

command1 -q lifeishard
ReallyLong Command -t LifeIsHarderWhenYouCantFigureItOut
ThirdCommand -u switchesare -cool
One more command

What I've tried in props.conf:

SEDCMD-substitute = 's/\¶/\n/g'
SEDCMD-alter = 's/\¶/\n/g'

Neither works. We have many other EVAL and FIELDALIAS statements under this sourcetype in props.conf that are functioning fine, so I think I'm just not formatting the SED properly or not taking the right approach. Does anyone have any advice on what I am doing wrong and what I need to do to achieve this result? Thank you for any help in advance!
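For comparison, here is a minimal props.conf sketch of an ingest-time SEDCMD under a placeholder sourcetype; the main differences from the snippets above are that the value is not quoted, the pilcrow does not need a backslash escape, and the substitution is applied to _raw while the event is being parsed, so it has to live on the first heavy forwarder or indexer that parses this sourcetype rather than on a universal forwarder.

# props.conf on the component that first parses this sourcetype (HF or indexer).
# The sourcetype name is a placeholder; SEDCMD values are written without quotes.
[my:edr:sourcetype]
SEDCMD-newline_commandhistory = s/¶/\n/g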
When it comes to pulling in requirements for a dashboard, the number one requirement I have is "what is the core point of the overall dashboard?" Your questions are good, but they may create a "metrics at the cost of use" mindset, where people are so worried about building metrics and pre-answering questions that they don't just sit down and quickly build something useful. Your approach may also work against interrelated dashboards that drill down from one to another, a usage model I've used very effectively and which you also see in the Monitoring Console (MC). Dashboards aren't something to be greedy or stingy about. Let people build them.

That said, my core rule of thumb is something you may not have noticed: good dashboards minimize scrolling. In a well-written dashboard, users don't have to scroll (much). Use pop-up panels (with close links for the pop-up), use drilldowns to other dashboards, or just build highly targeted dashboards. It helps, a lot.
The question of which inputs to enable or not is always "the inputs that provide the logs you care about". You don't have to worry about the HF vs UF question in this area.

That said, make sure you aren't routing through a HF just for the sake of it. HFs should typically only be used for a few reasons:

- When you cannot route (actual routing, not just firewall rules) from the UF to the IDX, either on a time schedule or permanently
- When you need modular inputs

In almost every other scenario, it's best not to go through a HF. Your question makes me suspect you may be routing through a HF. When a HF (or IDX) receives logs from another source, it will "cook" (parse) the logs and then send them on according to outputs.conf. None of the log types being discussed here puts you in danger of runaway log volume growth or a logging loop.
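To make the "don't route through a HF just for the sake of it" point concrete, this is roughly what a UF sending straight to the indexing tier looks like in outputs.conf; the group name and indexer addresses are placeholders for your environment.

# outputs.conf on the universal forwarder, sending directly to the indexers.
# Group name and host:port values are placeholders.
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true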
I have created some indexes on Splunk Cloud; can we not delete these indexes? The option to delete them is disabled in Splunk Cloud. Can anyone help with this?
Thank you @ITWhisperer 
Hi, Ironstream isn't a Splunk product, and IBM mainframes and minis are protected by a rather large paywall. Have you talked to your Ironstream account/support team? Ironstream documentation is open, but I don't see a reference to ESDS or other data sets. If Ironstream can read a data set and deserialize its records to UTF-8, it should be technically possible for Splunk to receive the data.
Would the Akamai add-on work with the Akamai Prolexic Analytics API?