All Posts

@JohnSmith123  Ensure that the regex in host_transform actually matches the part of the event data you want to extract as the host, and that it has a capture group for that value (FORMAT = host::$1 needs one). You can test your regex separately to confirm it captures the desired value; a quick way to do that is sketched after the list below. Please provide:

- The actual REGEX used in host_transform and sourcetype_transform.
- A sample of the raw event data (_raw).
- Details about where the configurations are deployed (e.g., on a heavy forwarder).
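To test the extraction separately, run the pattern through rex at search time before touching the .conf files. A minimal sketch; the sample event and the pattern here are placeholders, not your actual data:

| makeresults
| eval _raw="May  1 12:00:00 myhost01 sshd[123]: session opened"
| rex field=_raw "^\w+\s+\d+\s+[\d:]+\s+(?<extracted_host>\S+)"
| table _raw extracted_host

If extracted_host comes back empty here, the same pattern won't populate the host in the transform either.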
Since this is Windows, can you try using full paths in your command line? Example:

java -javaagent:"C:\Users\user\Downloads\splunk-otel-javaagent.jar" -jar C:\Projects\my-app\target\my-app-0.0.1-SNAPSHOT.jar

Also, please check whether "java -version" returns the same result as "mvn -v". It's possible your call to "java" is using a different distribution or version than you're expecting.
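One quick way to check that, from a Windows command prompt (these are plain Windows built-ins, nothing specific to the poster's setup):

where java
java -version

If "where java" lists more than one path, the first entry is the one a plain "java" call resolves to.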
Just to be clear, you're integrating ThousandEyes with Splunk Observability Cloud, right? If you're integrating with Splunk Enterprise or Splunk Cloud, you'll want to look at different message boards.

In Splunk Observability Cloud, you can use Metric Finder to see if you're ingesting data from ThousandEyes. Look for metrics like "network.latency", or type "network." in the Metric Finder search to see if anything auto-completes. Depending on the type of ThousandEyes tests you're running, you can also search for things like "http.server.request.availability".

The ThousandEyes docs might be more helpful: https://docs.thousandeyes.com/product-documentation/integration-guides/custom-built-integrations/opentelemetry/configure-opentelemetry-streams/ui
Hello everyone. I'm trying to set the host and sourcetype values from event data. The result is that the sourcetype is overridden as expected, while the host value is NOT. By applying the following transforms.conf and props.conf, I expect that:

- The sourcetype is overridden from default:log to mysp (this works as expected).
- Then, for events with sourcetype mysp, the host value is overridden with my event data using regex extraction (this didn't work).

I'm confused about why it didn't work for the host field. Hopefully someone can kindly help me out here. Thanks.

transforms.conf

[sourcetype_transform]
SOURCE_KEY = _raw
REGEX = <my_regex>
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mysp

[host_transform]
REGEX = <my_regex>
FORMAT = host::$1
DEST_KEY = MetaData:Host

props.conf

[default:log]
TRANSFORMS-force_sourcetype = sourcetype_transform

[mysp]
TRANSFORMS-force_host = host_transform
This page might help: https://docs.splunk.com/observability/en/metrics-and-metadata/relatedcontent.html#splunk-infrastructure-monitoring

You may want to look at the service.name, k8s.cluster.name, and k8s.pod.name values to see if they're what you expect.

Other ideas to possibly consider (a sketch follows the list):

- Are you using the Splunk Helm chart to deploy OTel to your Kubernetes cluster?
- Did you configure the operator in the Helm chart to instrument your Java app?
- Did you overwrite or change any resourcedetection processors?
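On the operator point, this is roughly what the opt-in looks like with the splunk-otel-collector Helm chart. Treat it as a sketch from that chart's documented layout and verify the exact keys against your chart version:

# values.yaml fragment
operator:
  enabled: true   # deploys the OpenTelemetry operator alongside the collector

# Then opt the Java workload in by annotating its pod template:
#   instrumentation.opentelemetry.io/inject-java: "true"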
Why am I getting an error from one of the indexers in the indexer cluster while running a report from a particular app? The error is below.

The following error(s) and caution(s) occurred while the search ran. Therefore, search results might be incomplete.

- Remote search process failed on peer.
- Search results might be incomplete: the search process on the peer [elog-idx04.opsnet.gov.sg] ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log, as well as the search.log for the particular search.
- [elog-idx04.opsnet.gov.sg] Search process did not exit cleanly, exit_code=111, description="exited with error: Application does not exist: 'eg_abcapp'". Please look in search.log for this peer in the Job Inspector for more info.
@rselv21  As @richgalloway said, these values are generally recommended because they provide a good balance between data availability and storage efficiency.

SF and RF - How much count should we keep ? - Splunk Community
Solved: Search factor vs Replication factor-If I change my... - Splunk Community

In a clustered environment, the Replication Factor (RF) determines how many copies of each data bucket are maintained across the indexers. For example, with an RF of 3, each data bucket will have 3 copies spread across different indexers. This ensures that if one or two indexers fail, the data is still available on the remaining indexers.

The Search Factor (SF) specifies how many of these replicated copies are searchable. With an SF of 2, two of the replicated copies will be kept in a searchable state, allowing search heads to query the data even if one of the searchable copies becomes unavailable.

With 9 indexers and an RF of 3, each data bucket will be replicated across 3 of the 9 indexers. This means that any given bucket will have 3 copies, ensuring redundancy and high availability.
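For reference, RF and SF are set on the cluster manager in server.conf. A minimal sketch (the mode value varies by Splunk version, e.g. manager on current releases, master on older ones):

[clustering]
mode = manager
replication_factor = 3
search_factor = 2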
@br0wall  The "license is expired" error typically occurs when:

- You're using a Splunk Enterprise trial license, which expires after 60 days.
- Your personal installation is not connected to your company's Splunk License Master, which manages valid licenses for business accounts.
- There is a mix-up between a trial license on your personal computer and your company's enterprise license.

If you want to remove those messages, you have two options:

1 - Connect your instance to your company's Splunk License Master.
2 - Purchase a license for your personal Splunk box.

Note: If you select option 1, please think carefully about the data you are sending to this instance, because all the data you index in your personal box will count against the company license, and you could hit the daily license capacity.

Please read this document for more details about how Splunk licensing works: https://docs.splunk.com/Documentation/Splunk/latest/Admin/HowSplunklicensingworks

Since you have a business account through your job, your company likely has a valid Splunk license. The issue may arise because your local Splunk installation is using a default trial license instead of connecting to your company's License Master.

https://community.splunk.com/t5/Getting-Data-In/How-to-resolve-quot-Invalid-username-or-password-Your-license-is/m-p/352690

NOTE: As I said, the Splunk Enterprise trial license is valid for 60 days. I would suggest doing a complete uninstallation of Splunk on your local system and installing it again. Link to download: https://www.splunk.com/en_us/download/splunk-enterprise.html

You can also monitor license usage using https://docs.splunk.com/Documentation/Splunk/latest/Admin/AboutSplunksLicenseUsageReportView
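If you go with option 1, pointing your local instance at the company's License Master is typically one stanza in server.conf on your box. A sketch: the hostname is a placeholder, and on newer releases the attribute is spelled manager_uri rather than master_uri, so verify for your version:

[license]
master_uri = https://license-master.example.com:8089

Restart Splunk afterwards for the change to take effect.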
Trying to log into Splunk; this is my first time putting it on my personal computer. I have a business account through my job. When I try logging in, my password will not work and it says the license is expired.
This is a question only you can answer based on your risk tolerance level and how much storage you have. I recommend RF/SF values of at least 2.  Higher values offer more protection against failure, but at the cost of additional storage.
Hi everyone, can you please suggest the recommended RF and SF numbers for a Splunk clustered environment with a total of 9 indexers and 7 search heads in an indexer clustering set-up?

Also, please let me know how many copies of each bucket will be replicated across the 9 indexers.
Nice catch about the linux_audit vs. linux_admin. But while I recognize linux_audit, I don't recall ever seeing linux_admin, so that might actually be the typo.
You can use the "Splunk App for SOAR": https://splunkbase.splunk.com/app/6361
@Mit  Can you check these?

https://community.splunk.com/t5/Deployment-Architecture/streamfwd-app-error-in-var-log-splunk-streamfwd-log/m-p/658283
https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-uninstall-Independent-Stream-Forwarder/m-p/278073
@anthonyi  Identify the sourcetype of your Cisco ISE logs. A common sourcetype for ISE is cisco:ise:syslog. You can confirm this in Splunk's GUI or CLI.

GUI:

- Go to Search & Reporting.
- Run a search like index=<indexname>.
- Check the sourcetype field in the events to note the exact name (e.g., cisco:ise:syslog).

CLI: The sourcetype for Cisco ISE can be identified in the inputs.conf file. You can also review the effective props.conf settings with:

/opt/splunk/bin/splunk btool props list --debug

Next, edit props.conf to truncate events. For instance, if your props.conf file is located in /opt/splunk/etc/system/local, you can configure it as follows:

vi /opt/splunk/etc/system/local/props.conf

Add the following stanza to set the TRUNCATE parameter for your ISE logs:

[your_sourcetype]
TRUNCATE = 2000

For example, if your ISE logs have a sourcetype of cisco:ise:syslog, the stanza would be:

[cisco:ise:syslog]
TRUNCATE = 2000

This setting ensures that any event exceeding 2000 bytes will be truncated, reducing the size of each event stored in Splunk. After saving the changes to props.conf, restart your Splunk instance to apply the new configuration:

/opt/splunk/bin/splunk restart

NOTE: If this add-on is not yet installed, please proceed with the installation: https://splunkbase.splunk.com/app/1915

Reference for sourcetypes: https://splunk.github.io/splunk-add-on-for-cisco-identity-services/Sourcetypes/
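One way to confirm the setting is taking effect after the restart is to watch for the warnings Splunk logs when it truncates a line. A sketch, assuming you can search the default _internal index:

index=_internal sourcetype=splunkd component=LineBreakingProcessor "Truncating line"
| stats count BY host data_sourcetype

Events showing up here with your ISE sourcetype indicate the TRUNCATE limit is being applied.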
Seems like this is much more involved than I initially thought. Before you delve into crevices, maybe check something more obvious: rex (or a regex autoextract) does not itself filter results. You still need a filter to do that.

index=accounting sourcetype=linux_admin | rex field=_raw "(?<ssh>\bssh\b)"

Have you tried adding a filter after rex, like this?

index=accounting sourcetype=linux_admin | rex field=_raw "(?<ssh>\bssh\b)" | where isnotnull(ssh)

This tells Splunk to return only those events in which the regex has a match. If you use autoextraction as your props.conf shows, you apply the filter with something like:

index=accounting sourcetype=linux_admin ssh=*

But here is another obvious mismatch. props.conf:

[linux_audit]
TRANSFORMS-changesourcetype = change_sourcetype_authentication

This stanza applies to sourcetype linux_audit, NOT linux_admin as suggested in your original search. Is this a typo from when you set up the autoextraction?
Great, in that case you should be able to make the changes in the UI if preferred. Did this work for you?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @dtsao  I'm afraid you lost me at transaction - I don't think I've seen a good use case for transaction in a number of years; stats is almost always the better option. The way I would approach this is to use something like foreach to loop through your array/multivalue field and set a fixed field with the value you are trying to transact on (a sketch follows this post). Once you've got this, you should be able to do things with stats like:

| stats range(_time) as timeRange, count, etc BY yourField

If you're able to provide some sample data (redacted if needed) then I'd be happy to create a full query for you.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
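To make that concrete: foreach's multivalue mode can do the loop on newer Splunk versions, but mvexpand plus a filter is a simpler way to sketch the same idea. All field names here (ids, the txn- prefix) are invented for illustration:

| makeresults count=4
| streamstats count as n
| eval _time=_time+n*10
| eval ids=split(if(n<3, "txn-1,host-a", "txn-2,host-b"), ",")
``` above: fake events carrying a multivalue field ```
| mvexpand ids
| where match(ids, "^txn-")
| stats range(_time) as timeRange, count BY ids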
Hi @fraserphillips  I haven't got access to ES today to check this, however it could be the context of the app you are using for the search. In the video, can you see which app they are in when they run the search, and are you in the same app when you run yours? (I'm assuming Mission Control or the ES app.) It could be that the event action is only available in a particular app context.

If you're able to share a link to the video I can check for you, although I have a feeling that this is an ES7 feature that might not be in ES8 (yet?). The more I think about it, the more I think this behaviour is different in ES8 and you're expected to create Investigations from the Analyst Queue and then work from there.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @k1green97  Check out the following: if your Field2 is a multivalue field, you should be good with a 'where IN':

| where Field1 IN (Field2)

Full example:

| windbag
| head 25
| streamstats count as Field1
| table _time Field1
| eval Field2=split("27,33,17,22,24,31,29,08,36",",")
| where Field1 IN (Field2)

HOWEVER, if, as it looks in the table you posted (for Row 1, Field1=17 and Field2=27), you want to check whether Field1 is in the combined list of Field2 values across all rows, then you will need to group them together first using eventstats:

| eventstats values(Field2) as Field2
| where Field1 IN (Field2)

Full example:

| makeresults count=9
| streamstats count as _n
| eval Field1=case(_n=1, 17, _n=2, 24, _n=3, 36)
| eval Field2=case(_n=1, 27, _n=2, 33, _n=3, 17, _n=4, 22, _n=5, 24, _n=6, 31, _n=7, 29, _n=8, 8, _n=9, 36)
| fields - _time
``` finished data sample ```
| eventstats values(Field2) as Field2
| where Field1 IN (Field2)

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing