All Posts


In your case the user can define his own field which will always have a value matching that of the search filter. The simplest way to do so would be to create a calculated field service="your service". And if your search filter relies on the service="your service" condition - well, that condition will be met for all events, effectively rendering this part of the filter useless.
The issue is definitely that I have to add some indexers and maybe also 1 or 2 SHs to the cluster. The infrastructure is currently undersized; it can't manage all the current data and jobs. Due to a very high data burst during office hours (9 to 17), delays (for very, very massive log files) and CPU saturation on the indexer side, the infrastructure can't manage all the data, user interaction and scheduled jobs at once. So the indexers stop responding at times. Pipelines is set to 1; if I raise it to 2 the system collapses.

The Monitoring Console highlighted some heavy queries during that time range that also write directly to some indexes. But my own monitoring dashboard on the SHs shows a strong delay for heavy logs (from 15 to 90 minutes before they reach 0 minutes of delay and the indexes can write their queues) and some blocked queues (I have a 1000MB size set for many queues), and all of that easily points to a collapsing infrastructure 🤷‍

The infrastructure grew over the last months, so it's time to add some servers. I began with 2 indexers, then 4; now I really have to go to 8-12. Splunk best practices also suggest a 12-indexer infrastructure for my current data flow (2-3 TB per day). Meanwhile, I fixed the current situation by disabling heavy logs and heavy jobs on the SHs 🤷‍ I also lowered the throughput for UFs, from maximum to 10MB/s. The system works, but with some features and data disabled. Thanks all.
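For reference, capping UF throughput like this is typically done in limits.conf on the forwarder; a minimal sketch, assuming a 10MB/s cap:

# limits.conf on the Universal Forwarder
[thruput]
# maxKBps is in kilobytes per second; 10240 KBps is roughly 10 MB/s (0 means unlimited)
maxKBps = 10240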
cat props.conf

[opco_sony]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000

# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false

TRANSFORMS-0_fix_hostname = syslog-host
TRANSFORMS-1_extract_fqdn = f5_waf-extract_service
TRANSFORMS-2_fix_index = f5_waf-route_to_index

cat transforms.conf

# FIELD EXTRACTION USING A REGEX
[f5_waf-extract_service]
SOURCE_KEY = _raw
REGEX = Host:\s(.+)\n
FORMAT = service::$1
WRITE_META = true

# Routes the data to a different index -- This must be listed in a TRANSFORMS-<name> entry.
[f5_waf-route_to_index]
INGEST_EVAL = indexname=json_extract(lookup("service_indexname_mapping.csv", json_object("service", service), json_array("indexname")), "indexname"), index=if(isnotnull(indexname), if(isnotnull(index) and match(index, "_cont$"), index, indexname), index), service:=null(), indexname:=null()

cat service_indexname_mapping.csv

service,indexname
juniper-prod,opco_juniper_prod
juniper-non-prod,opco_juniper_non_prod

This is the backend configuration to route logs from a global index to separate indexes by service name. How can I make this service field an indexed field?
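Since the extraction transform above already sets WRITE_META = true, the service field should be written into the index as metadata; what is usually still missing is a fields.conf entry on the search head so Splunk treats it as an indexed field at search time. A minimal sketch:

# fields.conf on the search head(s)
[service]
INDEXED = true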
@PickleRick ok, and how will this apply in my case? If I restrict them based on service for the summary index, even if a user runs |stats count by service, he cannot see other services' data, right? What else can he do here?
I was able to get it working now. I uninstalled Java and installed JRE 17. During the install I allowed the JAVA_HOME path and used the same path under Configuration > Settings. Thanks for sharing valuable inputs, which made me try a couple of different options. Appreciated!
Hi @punyabrata2025

The AppDynamics Machine Agent and its extensions, including the MQ extension, rely on the Java version used to run the agent for TLS protocol support. If your Java runtime supports TLS 1.3 (Java 11+), then the agent and its extensions can use TLS 1.3. However, as you said, the AppDynamics documentation does not specify any limitation on TLS 1.3 for the MQ extension itself.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @verbal_666

This does sound like a resource availability issue. Please can you check the Monitoring Console: https://<yourSplunkEnv>:<port>/en-US/app/splunk_monitoring_console/indexing_performance_instance

This should highlight any blockage in the pipeline. In the meantime, could you also confirm the number of parallelIngestionPipelines set in server.conf? I'd suggest using btool for this:

$SPLUNK_HOME/bin/splunk cmd btool server list --debug general

What value do you have for parallelIngestionPipelines?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
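For context, parallelIngestionPipelines lives in the [general] stanza of server.conf; a minimal sketch of what the setting looks like (the value shown is only an example, not a recommendation):

# server.conf on the indexer
[general]
# each additional pipeline consumes additional CPU cores
parallelIngestionPipelines = 2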
Hi @Manjunathmuni

I've tried to replicate this issue but haven't had any success. Can I check - do you have any srchFilter, srchTimeEarliest or srchTimeWin settings in your authorize.conf for your role? Does this affect users in different roles too?

I would suggest raising this with Splunk Support. In the meantime, please confirm the above regarding the role(s).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
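A quick way to inspect those settings on the search head, assuming a standard install path:

$SPLUNK_HOME/bin/splunk cmd btool authorize list --debug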
Again - you are setting the wrong path. Your JAVA_HOME should be c:\program files\java\jre_version

That's it. No \bin, no java.exe, nothing else. Just the base directory of your Java installation.
Using search-time fields in search filters for limiting user access can be easily bypassed. Search filters for roles generate an additional condition or set of conditions which is added to the original search. So - for example - if your user searches for index=windows and his role has a search filter of EventID=4648, the effective search spawned is

index=windows EventID=4648

And all seems fine and dandy - the user searches only for the given EventID. But a user can just create a calculated field assigning a static value of 4648 to all events. Then all events will match the search filter, and all events (even those not originally having EventID=4648) will be returned. So search filters should not (at least not when used with search-time fields) be used as access control.
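To make the bypass concrete, a minimal sketch of the calculated field described above (the sourcetype name is hypothetical):

# props.conf in any app the user can write to
[some_windows_sourcetype]
# every event now carries EventID=4648 at search time, satisfying the role's filter
EVAL-EventID = 4648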
It's still not working. Here are the screenshots.

Note: upon adding the above JAVA_HOME path, it doesn't show any notification of whether it saved or not. Upon refreshing the page, the path is no longer there.

I also created a new JAVA_HOME under Environment Variables, but no luck.
Hi @shoaibalimir

When you assessed it and didn't get the required outcome - what was the specific issue you had? Is this a one-time ingestion of historic files already in S3, or are you wanting to ingest on an ongoing basis (I assume the latter)?

Personally I would avoid Generic-S3 as it relies on checkpoint files and can get messy quickly. SQS-based S3 is the way to go here, I believe. Check out https://splunk.github.io/splunk-add-on-for-amazon-web-services/SQS-basedS3/ for more details on setting up an SQS-based-S3 input.

It's also worth noting that the dynamic parts of the path shouldn't be a problem. If you have requirements to put them into specific indexes depending on the dynamic values, then you can configure this when you set up the event notification (https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html) and will probably need multiple SQS queues. Alternatively, you could use props/transforms to route to the correct index at ingest time, as sketched below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
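A minimal sketch of ingest-time index routing based on the S3 source path (all names here - sourcetype, path fragment, index - are hypothetical):

# props.conf
[aws:s3:mydata]
TRANSFORMS-route_by_path = route_prod_to_index

# transforms.conf
[route_prod_to_index]
# match on the source path rather than the raw event
SOURCE_KEY = MetaData:Source
REGEX = /env=prod/
DEST_KEY = _MetaData:Index
FORMAT = my_prod_index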
Hi @isahu

Can you have a look at $SPLUNK_HOME/var/log/splunk/splunkd.log, specifically errors relating to "TcpOutputProc" or "TcpOutputFd". Please also confirm that the outputs.conf is configured as expected using btool:

$SPLUNK_HOME/bin/splunk cmd btool outputs list --debug

As others have said, the lack of _internal logs from the UF points to an issue with sending outbound; hopefully the above troubleshooting will help determine the cause of the issue.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
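For comparison, a minimal healthy outputs.conf on a UF typically looks something like this (group name, hostnames and ports are hypothetical):

# outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997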
Hi, in my company we are using Splunk Enterprise in a clustered architecture. I recently updated my servers (not Splunk). After that, and after restarting the Splunk deployment server, all forwarders are trying to phone home; when listening on the deployment server, it is receiving the calls, but when I check the clients in the Forwarder Management section it is empty. What can I do?
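One way to cross-check both sides of the phone-home, assuming a standard install path:

# on the deployment server: list the clients it has registered
$SPLUNK_HOME/bin/splunk list deploy-clients

# on a forwarder: confirm where it is phoning home to
$SPLUNK_HOME/bin/splunk cmd btool deploymentclient list --debug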
Please stop spamming multiple posts - your question has been asked (again) here - you have been given solutions (which you don't appear to want to use). If anyone can come up with alternatives, they will most likely respond here.
OK, hence I have given = for service and index. I hope it will work. Will the stanzas I have given work as expected, or does srchFilter have any behaviour that can't be defined?
For a search-time field you cannot use the :: syntax.
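A quick illustration of the distinction (the index and values are hypothetical):

index=opco_summary service::juniper-prod
(works only if service is an indexed field)

index=opco_summary service="juniper-prod"
(works for search-time extracted fields)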
@gcusello we have already implemented RBAC-restricted access to indexes. Now the headache has started because of this single shared summary index.
I checked ChatGPT and the authorize.conf docs and wrote this. Please help me understand whether this will work if a user has access to both of these roles; they still need to access non_prod, prod, and summary data restricted to their service.

Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod
srchFilter = index=non_prod

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod OR (index=opco_summary AND service=juniper-prod))

I am worried about how this srchFilter works across multiple roles (a few managers have access to 6-8 AD groups, meaning 6-8 indexes); they still need to see all data, including summary data, for those 6-8 services.
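For what it's worth, when a user holds multiple roles, Splunk combines the roles' search filters with OR by default, so the effective filter for the two roles above would behave roughly like this (a sketch, not literal generated SPL):

(index=non_prod) OR (index=prod OR (index=opco_summary AND service=juniper-prod))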
Hi @Karthikeya,

in Splunk, restriction of access to data is managed at the index level, not at the app level. In other words, when you create a role, you should define the indexes that the role can access: e.g. role1 accesses only index1 and role2 only accesses index2; then you can assign one role or both of them to a user depending on your requirements. You can do this in [Settings > Roles > Indexes].

In addition, you can optionally add some restrictions on an index (e.g. on the wineventlog index, one role can access only events with EventCode IN (4624,4625,4634), while another role can access all the events in the wineventlog index). You can do this in [Settings > Roles > Restrictions].

Ciao.

Giuseppe
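The same setup expressed directly in authorize.conf, as a minimal sketch (the role and index names are the hypothetical ones from the example above):

# authorize.conf
[role_role1]
srchIndexesAllowed = index1

[role_role2]
srchIndexesAllowed = index2
# restriction appended to every search this role runs
srchFilter = EventCode=4624 OR EventCode=4625 OR EventCode=4634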