All Posts


Hi @JohnSmith123

I don't think you get two bites of the props.conf cherry when changing a sourcetype name. Instead, you need to apply your host_transform to the "default:log" sourcetype rather than the new sourcetype name. Try the following:

== props.conf ==
[default:log]
TRANSFORMS-force_sourcetype = sourcetype_transform
TRANSFORMS-force_host = host_transform

== transforms.conf ==
[sourcetype_transform]
SOURCE_KEY = _raw
REGEX = <my_regex>
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mysp

[host_transform]
REGEX = <my_regex>
FORMAT = host::$1
DEST_KEY = MetaData:Host

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
See my response in your other thread.
Hi @sudha_krish

Sending out over HTTP does not use an open HTTP standard or API: Splunk wraps its own Splunk2Splunk protocol in HTTP, so it is only supported for sending to other Splunk systems.

If you want to send data to a non-Splunk system, you can look at syslog forwarding; however, this sends the raw events before they are parsed. For more information on sending to external systems, please check out https://docs.splunk.com/Documentation/SplunkCloud/latest/Forwarding/Forwarddatatothird-partysystemsd
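For reference, a minimal sketch of routing one sourcetype to an external syslog receiver looks roughly like this. All stanza and group names here (my_sourcetype, send_to_syslog, external_syslog) and the destination host are placeholders, not values from this thread:

```ini
# props.conf (on the forwarder) -- attach a routing transform to the sourcetype
[my_sourcetype]
TRANSFORMS-syslog = send_to_syslog

# transforms.conf -- match every event and set the syslog routing key
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = external_syslog

# outputs.conf -- define the syslog output group referenced above
[syslog:external_syslog]
server = siem.example.com:514
type = udp
```

Note that this path sends the raw event text only; the Splunk metadata fields are not carried along.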
@sudha_krish

Splunk forwarders (Universal or Heavy) send only raw event data to non-Splunk systems over TCP or syslog by default, as outlined in the Splunk documentation. Metadata such as host, sourcetype, source, and index is internal to Splunk and is not included in the raw event payload.

To forward logs with metadata over HTTP reliably, tools like Cribl Stream are commonly used. These tools can intercept Splunk data, enrich it with metadata, and send it to third-party systems via HTTP.

Cribl (https://cribl.io/) allows you to route events to multiple systems while maintaining full metadata. In addition, you can be very selective about what goes where, and you can reshape and enrich events as they move.
That is correct. When Splunk sends data over a "plain TCP" connection it just sends the raw event (there is some degree of configurability, but AFAIR it's limited to sending a syslog priority header and maybe a timestamp). If you wanted to send the event metadata (sourcetype/source/host) along with the event, you'd have to rewrite the event's raw contents.

But if you want to retain the event and index it locally as well as sending it out, you most probably want to index it in an unchanged form. And that's where it gets complicated. You'd have to use the CLONE_SOURCETYPE functionality to duplicate your event and split its processing path. Then send the original to your indexer(s), while the cloned copy can be modified and routed to your TCP output.
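A rough sketch of that split path might look like the following. All stanza names, the export sourcetype, and the destination host are hypothetical placeholders:

```ini
# props.conf -- clone every event of the original sourcetype
[my_sourcetype]
TRANSFORMS-clone = clone_for_export

# transforms.conf -- CLONE_SOURCETYPE duplicates the event under a new sourcetype
[clone_for_export]
REGEX = .
CLONE_SOURCETYPE = my_sourcetype_export

# props.conf -- route only the cloned events to the TCP output group
[my_sourcetype_export]
TRANSFORMS-route = route_to_tcp

# transforms.conf -- set the TCP routing key for the clones
[route_to_tcp]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = external_tcp

# outputs.conf -- plain (uncooked) TCP output to the third-party system
[tcpout:external_tcp]
server = thirdparty.example.com:9999
sendCookedData = false
```

With this in place, you could additionally rewrite _raw on the cloned sourcetype (for example, to prepend the host and sourcetype values) without touching the locally indexed original.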
@livehybrid  @PickleRick  @gcusello  Thanks for your responses. I found in the Splunk documentation that forwarding logs to third-party systems is typically done over TCP. I tried using TCP, but I did not receive Splunk metadata like host, sourcetype, source, and index on the third-party system.
Thanks for your answer. I found in the Splunk documentation that forwarding logs to third-party systems is typically done over TCP. I tried using TCP, but I did not receive Splunk metadata like host, sourcetype, source, and index on the third-party system.
I want to forward logs to a third-party system over HTTP, but I found in the Splunk documentation that forwarding logs to third-party systems is typically done over TCP. I tried using TCP, but I did not receive Splunk metadata like host, sourcetype, source, and index on the third-party system. Is it possible to forward logs with metadata to a third-party system over HTTP? If not, how can I get Splunk metadata over TCP? Can anyone suggest a solution? @splunk @splunkent2 @Splunk9 @msplunk @splunk0 
@JohnSmith123

Ensure that the regex in host_transform correctly matches the part of the event data you want to extract as the host. You can test your regex separately to confirm it captures the desired value. Please provide:
- The actual REGEX used in host_transform and sourcetype_transform.
- A sample of the raw event data (_raw).
- Details about where the configurations are deployed (e.g., heavy forwarder).
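As a quick sanity check outside Splunk, you can exercise a PCRE against a sample event on the command line. The sample event and the host= pattern below are made-up stand-ins for your <my_regex>:

```shell
# Hypothetical raw event; replace with a real line from your data
sample='2024-05-01 12:00:00 host=web01 msg="login ok"'

# grep -oP applies a PCRE and prints only the match;
# the lookbehind keeps just the would-be capture for host
echo "$sample" | grep -oP '(?<=host=)\S+'
# prints: web01
```

If this prints nothing, the regex would not populate $1 in your FORMAT either.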
Since this is Windows, can you try using full paths in your command line? Example:

java -javaagent:"C:\Users\user\Downloads\splunk-otel-javaagent.jar" -jar C:\Projects\my-app\target\my-app-0.0.1-SNAPSHOT.jar

Also, please check whether "java -version" returns the same result as "mvn -v". It's possible your call to "java" is using a different distribution or version than you're expecting.
Just to be clear, you're integrating Thousandeyes with Splunk Observability Cloud, right? If you're integrating with Splunk Enterprise or Splunk Cloud, you'll want to look at different message boards.

In Splunk Observability Cloud, you can use Metric Finder to see if you're ingesting data from Thousandeyes. Look for metrics like "network.latency", or type "network." in the Metric Finder search to see if anything auto-completes. Depending on the type of Thousandeyes tests you're doing, you can also search for things like "http.server.request.availability".

The Thousandeyes docs might be more helpful: https://docs.thousandeyes.com/product-documentation/integration-guides/custom-built-integrations/opentelemetry/configure-opentelemetry-streams/ui
Hello everyone. I'm trying to set the host and sourcetype values from event data. The result is that the sourcetype is overridden as expected, while the host value is NOT.

By applying the following transforms.conf and props.conf, I expect that:
1. The sourcetype is overridden from default:log to mysp (which works as expected).
2. Then, for events with sourcetype mysp, the host value is overridden with my event data using regex extraction (which didn't work).

This is confusing me, and I wonder why it didn't work for the host field. Hopefully someone can kindly help me out here. Thanks.

transforms.conf
[sourcetype_transform]
SOURCE_KEY = _raw
REGEX = <my_regex>
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mysp

[host_transform]
REGEX = <my_regex>
FORMAT = host::$1
DEST_KEY = MetaData:Host

props.conf
[default:log]
TRANSFORMS-force_sourcetype = sourcetype_transform

[mysp]
TRANSFORMS-force_host = host_transform
This page might help: https://docs.splunk.com/observability/en/metrics-and-metadata/relatedcontent.html#splunk-infrastructure-monitoring

You may want to look at the service.name, k8s.cluster.name, and k8s.pod.name values to see if they're what you expect.

Other ideas to possibly consider:
- Are you using the Splunk Helm chart to deploy OTel to your Kubernetes cluster?
- Did you configure the operator in the Helm chart to instrument your Java app?
- Did you overwrite or change any resourcedetection processors?
Why am I getting an error for one of the indexers in the indexer cluster while running a report from a particular app? The error is below.

The following error(s) and caution(s) occurred while the search ran, therefore search results might be incomplete:
- remote search process failed on peer
- Search results might be incomplete: the search process on the peer [elog-idx04.gov.sg] ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log, as well as the search.log for the particular search.
- [elog-idx04.opsnet.gov.sg] Search process did not exit cleanly, exit_code=111, description="exited with error: Application does not exist: eg_abcapp". Please look in search.log for this peer in the Job Inspector for more info.
@rselv21

As @richgalloway said, these values are generally recommended because they provide a good balance between data availability and storage efficiency.

SF and RF - How much count should we keep? - Splunk Community
Solved: Search factor vs Replication factor - If I change my... - Splunk Community

In a clustered environment, the Replication Factor (RF) determines how many copies of each data bucket are maintained across the indexers. For example, with an RF of 3, each data bucket will have 3 copies spread across different indexers. This ensures that if one or two indexers fail, the data is still available on the remaining indexers.

The Search Factor (SF) specifies how many of these replicated copies are searchable. With an SF of 2, two of the replicated copies will be kept in a searchable state, allowing search heads to query the data even if one of the searchable copies becomes unavailable.

With 9 indexers and an RF of 3, each data bucket will be replicated across 3 of the 9 indexers. This means that any given bucket will have 3 copies, ensuring redundancy and high availability.
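For context, RF and SF are set on the cluster manager node in server.conf. A minimal sketch with the values discussed above, using the Splunk 8.1+ stanza names (older versions use mode = master):

```ini
# server.conf on the cluster manager node
[clustering]
mode = manager
replication_factor = 3
search_factor = 2
```

SF must be less than or equal to RF, and changing either value triggers bucket fix-up activity across the peers, so plan the change for a quiet period.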
@br0wall

The "license is expired" error typically occurs when:
- You're using a Splunk Enterprise trial license, which expires after 60 days.
- Your personal installation is not connected to your company's Splunk license manager, which manages valid licenses for business accounts.
- There is a mix-up between a trial license on your personal computer and your company's enterprise license.

If you want to remove those messages, you have two options:
1 - Connect your instance to your company's Splunk license manager.
2 - Purchase a license for your personal Splunk box.

Note: if you select option 1, be careful about the data you send to this instance, because everything you index on your personal box will count against the company license, and you could hit the daily license capacity.

https://docs.splunk.com/Documentation/Splunk/latest/Admin/HowSplunklicensingworks
Please read this document for more details about how Splunk licensing works.

Since you have a business account through your job, your company likely has a valid Splunk license. The issue may arise because your local Splunk installation is using the default trial license instead of connecting to your company's license manager.

https://community.splunk.com/t5/Getting-Data-In/How-to-resolve-quot-Invalid-username-or-password-Your-license-is/m-p/352690

NOTE: As I said, the Splunk Enterprise trial license is valid for 60 days. I would suggest completely uninstalling Splunk from your local system and installing it again. Download link: https://www.splunk.com/en_us/download/splunk-enterprise.html

You can also monitor license usage using https://docs.splunk.com/Documentation/Splunk/latest/Admin/AboutSplunksLicenseUsageReportView
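If you go with option 1, pointing your instance at the company license manager is done in server.conf on your local installation. A sketch, assuming a hypothetical hostname (on versions before 8.1 the setting is named master_uri rather than manager_uri):

```ini
# server.conf on your personal (license peer) instance
[license]
manager_uri = https://license-manager.example.com:8089
```

You would need the real license manager hostname and port from your Splunk admin, plus network access to it from your machine.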
Trying to log into Splunk; this is my first time putting it on my personal computer. I have a business account through my job. When I try logging in, my password will not work and it says the license is expired.
This is a question only you can answer based on your risk tolerance level and how much storage you have. I recommend RF/SF values of at least 2.  Higher values offer more protection against failure, but at the cost of additional storage.
Hi Everyone, Can you please suggest the recommended RF and SF numbers for a Splunk clustered environment with 9 indexers and 7 search heads in an indexer clustering set-up? Also, please explain how many copies will be replicated across the 9 indexers.
Nice catch about the linux_audit vs. linux_admin. But while I recognize linux_audit, I don't recall ever seeing linux_admin, so that might actually be the typo.