All Posts

This breaks Splunk running in Rosetta on ARM-based M1/M2/M3/M4 Mac computers. Previously, Splunk could be run smoothly in a Rosetta-emulated Linux VM on new Macs.
@Manjunathmuni How are you producing that output for earliestTime and latestTime? Please share the query that produces that output, because those two times do not show the 15-minute preset range. Please also open the job inspector for a search you have run with those SPL values, then open the job properties at the bottom of that page, look for earliestTime and latestTime, and post those. They will be in the format 2025-07-28T00:31:00.000+01:00, not the same as your output.
I'm working on a transforms.conf to extract fields from a custom log format. Here's my regex:

REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)
FORMAT = srv::$1 ver::$2 msg::$3
DEST_KEY = _meta

This regex is supposed to extract the following from a log like:

Jul 27 14:10:05 x.y.z.k 1 2025-07-27T14:09:05Z QQQ123-G12-W4-AB iLO6 - - - iLO time update failed. Unable to contact NTP server.

Expected extracted fields:

srv = QQQ123-G12-W4-AB
ver = iLO6
msg = iLO time update failed. Unable to contact NTP server.

The regex works correctly when tested independently, and all three groups are matched. However, in Splunk, only the first two fields (srv and ver) are extracted correctly. The msg field only includes the first word: iLO. It seems Splunk is stopping at the first space for the msg field, despite the regex using (.*) at the end. Any idea what could be causing this behavior? Is there a setting or context where Splunk treats fields as single-token values by default? Any advice would be appreciated!
It's not that you can't do this or that. It's just that using a search filter is not a reliable method of limiting access. No one forbids you from doing it, but be aware that users can bypass your "restrictions". You can also technically edit the built-in stash sourcetype; it's just very, very strongly not recommended.

As I said before, you can index the summary back into the original index, but it might not be the best idea due to - as I assume - the greatly different volume of summary data vs. original data. So the best practice is to have a separate summary index for each group to which you have to grant access rights separately.

There are other options which are technically possible, but no one will advise them because they have their downsides and might not work properly (at least not in all cases). Asking again and again doesn't change the fact that the proper way to go is to have separate indexes. If for some reason you can't do that, you're left with the already described alternatives, each of which has its cons.
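As an illustration of the "separate summary index per group" approach, a rough sketch of the configuration - the index and role names here are made up:

# indexes.conf - one summary index per group (hypothetical names)
[opco_appA_summary]
homePath   = $SPLUNK_DB/opco_appA_summary/db
coldPath   = $SPLUNK_DB/opco_appA_summary/colddb
thawedPath = $SPLUNK_DB/opco_appA_summary/thaweddb

# authorize.conf - limit each group's role to its own indexes
[role_appA_users]
srchIndexesAllowed = opco_appA;opco_appA_summary

Each group's scheduled searches would then write to their own summary index (e.g. via | collect index=opco_appA_summary), and access is controlled at the index level rather than by search filters.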
@PickleRick sir, what can I do now? I am breaking my head. Is there no option left other than creating a separate summary index per app? If yes, can I ingest the respective summary index into the same app index (appA index - opco_appA, summary index also opco_appA)?
Wait a second. You're doing summary indexing. That means you're saving your summary data with the stash sourcetype. It has nothing to do with the original sourcetype - even if your original sourcetype had service as an indexed field, in the summary events it will be a normal search-time extracted field. And generally you shouldn't fiddle with the default internal Splunk sourcetypes.
@PickleRick then if I make service an indexed field, will it solve my problem, or is there any chance that this can be violated at some point?
In your case the user can define his own field which will always have a value matching that of the search filter. The simplest way to do so would be to create a calculated field: service="your service". If your search filter relies on the condition service="your service" - well, that condition will then be met for all events, effectively rendering this part of the filter useless.
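A rough sketch of what such a bypass could look like, assuming the user can create knowledge objects in an app they have write access to (the sourcetype and value here are placeholders):

# props.conf in an app the user controls (placeholder sourcetype)
[your_sourcetype]
# Every event now carries service="your service" at search time,
# so a role's search filter of service="your service" matches everything.
EVAL-service = "your service"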
The issue is definitely that I have to add some indexers and maybe also 1 or 2 SHs to the cluster. The infrastructure is currently undersized; it can't manage all the current data and jobs. Due to a very high data burst during office hours (9 to 17), delays (for very, very large log files) and CPU saturation on the indexer side, the infrastructure can't manage all the data, user interaction and scheduled jobs at once. So the indexers stop responding at times. Pipelines is set to 1; if I raise it to 2 the system collapses.

The Monitoring Console flagged some heavy queries during that time range that also write directly to some indexes. But my own monitoring dashboards on the SHs show a strong delay for heavy logs (from 15 to 90 minutes before the delay drops back to 0 minutes and the indexers can drain their queues), some blocked queues (I have a 1000MB size set for many queues), and all of that easily points to a collapsing infrastructure 🤷‍

The infrastructure grew over the last months, so it's time to add some servers. I began with 2 indexers, then 4; now I really have to go to 8-12. Splunk best practices also suggest a 12-indexer infrastructure for my current data flow (2-3 TB per day). Meanwhile, I fixed the current situation by disabling heavy logs and heavy jobs on the SHs 🤷‍ I also lowered the throughput for the UFs, from maximum to 10MB/s. The system works, but with some features and data disabled. Thanks all.
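For reference, the UF throughput cap mentioned above is typically set in limits.conf on the forwarder; a minimal sketch, assuming a 10 MB/s limit:

# limits.conf on the universal forwarder
[thruput]
# value is in KB per second; ~10 MB/s (0 would mean unlimited)
maxKBps = 10240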
cat props.conf

[opco_sony]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000

# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false

TRANSFORMS-0_fix_hostname = syslog-host
TRANSFORMS-1_extract_fqdn = f5_waf-extract_service
TRANSFORMS-2_fix_index = f5_waf-route_to_index

cat transforms.conf

# FIELD EXTRACTION USING A REGEX
[f5_waf-extract_service]
SOURCE_KEY = _raw
REGEX = Host:\s(.+)\n
FORMAT = service::$1
WRITE_META = true

# Routes the data to a different index -- This must be listed in a TRANSFORMS-<name> entry.
[f5_waf-route_to_index]
INGEST_EVAL = indexname=json_extract(lookup("service_indexname_mapping.csv", json_object("service", service), json_array("indexname")), "indexname"), index=if(isnotnull(indexname), if(isnotnull(index) and match(index, "_cont$"), index, indexname), index), service:=null(), indexname:=null()

cat service_indexname_mapping.csv

service,indexname
juniper-prod,opco_juniper_prod
juniper-non-prod,opco_juniper_non_prod

This is the backend configuration used to route logs from the global index to separate indexes based on the service name. How do I make this service field an indexed field?
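For reference (not necessarily a complete answer to the question above): when an indexed field is written at ingest time via WRITE_META, the search tier is usually told about it in fields.conf so that searches treat it as an indexed field instead of extracting it at search time. A minimal sketch, assuming the field keeps the name service:

# fields.conf on the search head(s)
[service]
INDEXED = true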
@PickleRick ok, and how is this applicable in my case? If I restrict them based on service for the summary index, even if he runs |stats count by service he cannot see other services' data, right? What else can he do here?
I am able to get it working now. I uninstalled Java and installed JRE 17. During the install I allowed the JAVA_HOME path and used the same path under Configuration > Settings. Thanks for sharing valuable inputs which made me try a couple of different options. Appreciated!!!!
Hi @punyabrata2025

The AppDynamics Machine Agent and its extensions, including the MQ extension, rely on the Java version used to run the agent for TLS protocol support. If your Java runtime supports TLS 1.3 (Java 11+), then the agent and its extensions can use TLS 1.3. However, as you said, the AppDynamics documentation does not specify any limitation on TLS 1.3 for the MQ extension itself.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @verbal_666

This does sound like a resource availability issue. Please can you check the Monitoring Console:

https://<yourSplunkEnv>:<port>/en-US/app/splunk_monitoring_console/indexing_performance_instance

This should highlight any blockage in the pipeline. In the meantime, could you also confirm the number of parallelIngestionPipelines set in server.conf? I'd suggest using btool for this:

$SPLUNK_HOME/bin/splunk cmd btool server list --debug general

What value do you have for parallelIngestionPipelines?

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
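For reference, this setting lives in the [general] stanza of server.conf on each indexer; a minimal sketch, assuming two pipeline sets are wanted (check CPU and memory headroom before raising it):

# server.conf on each indexer
[general]
# number of ingestion pipeline sets (default is 1)
parallelIngestionPipelines = 2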
Hi @Manjunathmuni

I've tried to replicate this issue but not had any success. Can I check - do you have any srchFilter, srchTimeEarliest or srchTimeWin settings in your authorize.conf for your role? Does this affect users in different roles too?

I would suggest raising this with Splunk Support. In the meantime, please confirm the above regarding the role(s).

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Again - you are setting the wrong path. Your JAVA_HOME should be c:\program files\java\jre_version. That's it. No \bin, no java.exe, nothing else. Just the base directory of your Java installation.
Using search-time fields in search filters to limit user access can easily be bypassed. The search filter(s) for a role generate an additional condition, or set of conditions, which is added to the original search. So, for example, if your user searches for index=windows and his role has a search filter of EventID=4648, the effective search spawned is

index=windows EventID=4648

And all seems fine and dandy - the user searches only the given EventID. But the user can just create a calculated field assigning a static value of 4648 to all events. Then all events will match the search filter, and all events (even those not originally having EventID=4648) will be returned. So search filters should not (at least not when used with search-time fields) be used as access control.
It's still not working. Here are the screenshots.

Note: Upon adding the above JAVA_HOME path, there is no notification showing whether it was saved or not. Upon refreshing the page, the path is no longer there.

I also created a new JAVA_HOME under Environment Variables, but no luck.
Hi @shoaibalimir

When you assessed it and didn't get the required outcome - what was the issue you had, specifically? Is this a one-time ingestion of historic files already in S3, or are you wanting to ingest on an ongoing basis (I assume the latter)?

Personally I would avoid Generic-S3 as it relies on checkpoint files and can get messy quickly. SQS-based S3 is the way to go here, I believe. Check out https://splunk.github.io/splunk-add-on-for-amazon-web-services/SQS-basedS3/ for more details on setting up an SQS-based-S3 input.

It's also worth noting that the dynamic parts of the path shouldn't be a problem. If you have requirements to put them into specific indexes depending on the dynamic values, then you can configure this when you set up the event notification (https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html) and will probably need multiple SQS queues. Alternatively, you could use props/transforms to route to the correct index at ingest time (a rough sketch follows below).

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
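As an illustration of the props/transforms routing option mentioned above, a minimal sketch, assuming the target index can be derived from the S3 source path (the sourcetype, pattern and index name are all hypothetical):

# props.conf (hypothetical sourcetype)
[aws:s3:custom]
TRANSFORMS-route_by_path = route_s3_by_path

# transforms.conf
[route_s3_by_path]
# route events whose source path contains /prod/ to a dedicated index
SOURCE_KEY = MetaData:Source
REGEX = /prod/
DEST_KEY = _MetaData:Index
FORMAT = aws_prod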
Hi @isahu

Can you have a look at $SPLUNK_HOME/var/log/splunk/splunkd.log, specifically for errors relating to "TcpOutputProc" or "TcpOutputFd"?

Please also confirm that outputs.conf is configured as expected using btool:

$SPLUNK_HOME/bin/splunk cmd btool outputs list --debug

As others have said, the lack of _internal logs from the UF points to an issue with sending outbound; hopefully the above troubleshooting will help determine the cause of the issue.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
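For comparison, a working UF outputs.conf typically looks something like the following (host names and port are placeholders):

# outputs.conf on the universal forwarder (placeholder values)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997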