cat props.conf

[opco_sony]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000

# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false

TRANSFORMS-0_fix_hostname = syslog-host
TRANSFORMS-1_extract_fqdn = f5_waf-extract_service
TRANSFORMS-2_fix_index = f5_waf-route_to_index

cat transforms.conf

# FIELD EXTRACTION USING A REGEX
[f5_waf-extract_service]
SOURCE_KEY = _raw
REGEX = Host:\s(.+)\n
FORMAT = service::$1
WRITE_META = true

# Routes the data to a different index. This must be listed in a TRANSFORMS-<name> entry.
[f5_waf-route_to_index]
INGEST_EVAL = indexname=json_extract(lookup("service_indexname_mapping.csv", json_object("service", service), json_array("indexname")), "indexname"), index=if(isnotnull(indexname), if(isnotnull(index) and match(index, "_cont$"), index, indexname), index), service:=null(), indexname:=null()

cat service_indexname_mapping.csv

service,indexname
juniper-prod,opco_juniper_prod
juniper-non-prod,opco_juniper_non_prod

This is the backend configuration that routes logs from the global index to separate indexes by service name. How can I make this service field an indexed field?
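One thing worth noting: since the f5_waf-extract_service transform above sets WRITE_META = true, the service value should already be written into the index as an indexed field at ingest time. What is typically still needed is telling the search tier to treat it as indexed, via fields.conf on the search heads. A minimal sketch (the stanza name must match the field name exactly):

```
# fields.conf (deploy to the search head tier)
[service]
INDEXED = true
```

With this in place, service::juniper-prod can be used in searches (the :: syntax targets indexed fields directly). Treat this as a sketch under the assumption that no other fields.conf stanza for service exists.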
@PickleRick ok, and how will this be applicable in my case? If I restrict them based on service for the summary index, then even if a user runs |stats count by service he cannot see other services' data, right? What else could he do here?
I was able to get it working now. I uninstalled Java and installed JRE 17. During the install I allowed the JAVA_HOME path and used the same path under Configuration > Settings. Thanks for sharing valuable inputs, which made me try a couple of different options. Appreciated!
Hi @punyabrata2025

The AppDynamics Machine Agent and its extensions, including the MQ extension, rely on the Java version used to run the agent for TLS protocol support. If your Java runtime supports TLS 1.3 (Java 11+), then the agent and its extensions can use TLS 1.3. However, as you said, the AppDynamics documentation does not specify any limitation on TLS 1.3 for the MQ extension itself.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @verbal_666

This does sound like a resource availability issue. Can you please check the Monitoring Console at https://<yourSplunkEnv>:<port>/en-US/app/splunk_monitoring_console/indexing_performance_instance? This should highlight any blockage in the pipeline.

In the meantime, could you also confirm the number of parallelIngestionPipelines set in server.conf? I'd suggest using btool for this:

$SPLUNK_HOME/bin/splunk cmd btool server list --debug general

What value do you have for parallelIngestionPipelines?
Hi @Manjunathmuni

I've tried to replicate this issue but haven't had any success. Can I check: do you have any srchFilter, srchTimeEarliest or srchTimeWin settings in your authorize.conf for your role? Does this affect users in different roles too?

I would suggest raising this with Splunk Support. In the meantime, please confirm the above regarding the role(s).
Again, you are setting the wrong path. Your JAVA_HOME should be c:\program files\java\jre_version. That's it. No \bin, no java.exe, nothing else. Just the base directory of your Java installation.
Using search-time fields in search filters to limit user access can be easily bypassed. A search filter on a role generates an additional condition (or set of conditions) that is appended to the original search. So, for example, if your user searches for

index=windows

and his role has the search filter EventID=4648, the effective search spawned is

index=windows EventID=4648

All seems fine and dandy: the user searches only for the given EventID. But the user can simply create a calculated field assigning the static value 4648 to all events. Then every event matches the search filter, and all events (even those not originally having EventID=4648) are returned. So search filters should not be used as access control, at least not when based on search-time fields.
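To make the bypass concrete, here is a sketch of the kind of calculated field a user could define in their own app or user directory (the sourcetype name is hypothetical); it is pure search-time configuration, so no elevated privileges are needed:

```
# props.conf in the user's own app or user directory
# (hypothetical sourcetype name)
[my_windows_sourcetype]
# Overwrites EventID on every event at search time,
# so every event satisfies a filter like EventID=4648
EVAL-EventID = 4648
```

After this, the role's search filter matches all events in scope, defeating the restriction.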
It's still not working. Here are the screenshots. Note: upon adding the above JAVA_HOME path, there is no notification of whether it was saved or not, and upon refreshing the page the path is no longer there. I also created a new JAVA_HOME under Environment Variables, but no luck.
Hi @shoaibalimir

When you assessed it and didn't get the required outcome, what was the specific issue you hit? Is this a one-time ingestion of historic files already in S3, or do you want to ingest on an ongoing basis (I assume the latter)?

Personally I would avoid Generic-S3, as it relies on checkpoint files and can get messy quickly. SQS-based S3 is the way to go here, I believe. Check out https://splunk.github.io/splunk-add-on-for-amazon-web-services/SQS-basedS3/ for more details on setting up an SQS-based S3 input.

It's also worth noting that the dynamic parts of the path shouldn't be a problem. If you have requirements to put them into specific indexes depending on the dynamic values, then you can configure this when you set up the event notification (https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html) and will probably need multiple SQS queues. Alternatively, you could use props/transforms to route to the correct index at ingest time.
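For reference, an SQS-based S3 input from the Splunk Add-on for AWS ends up as a stanza along these lines in inputs.conf. All names and the queue URL below are hypothetical placeholders, and the exact parameter set depends on the add-on version, so treat this as a sketch and configure through the add-on UI where possible:

```
# inputs.conf -- sketch only; account name, queue URL,
# index and interval are placeholders
[aws_sqs_based_s3://my_s3_input]
aws_account = my_aws_account
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/123456789012/my-s3-notifications
sourcetype = aws:s3
index = main
interval = 300
```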
Hi @isahu

Can you have a look at $SPLUNK_HOME/var/log/splunk/splunkd.log, specifically for errors relating to "TcpOutputProc" or "TcpOutputFd"? Please also confirm that outputs.conf is configured as expected using btool:

$SPLUNK_HOME/bin/splunk cmd btool outputs list --debug

As others have said, the lack of _internal logs from the UF points to an issue with sending data outbound; hopefully the above troubleshooting will help determine the cause of the issue.
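For comparison, a minimal working universal forwarder outputs.conf typically looks like the sketch below (the group name and indexer hostnames are placeholders). If the btool output differs materially from this shape, that is a good place to start:

```
# outputs.conf -- minimal forwarding sketch, placeholder hostnames
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Comma-separated list of receiving indexers and their splunktcp ports
server = idx1.example.com:9997, idx2.example.com:9997
```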
Hi, in my company we are using Splunk Enterprise in a clustered architecture. I recently updated my servers (not Splunk). After that, and after restarting the Splunk deployment server, all forwarders are trying to phone home. When I listen on the deployment server, it is receiving the calls, but when I check clients in the Forwarder Management section it is empty. What can I do?
Please stop spamming multiple posts. Your question has been asked (again) here, and you have been given solutions (which you don't appear to want to use). If anyone can come up with alternatives, they will most likely respond here.
OK, hence I used = for service and index. I hope it will work. Will the stanzas I have given work as expected, or does srchFilter have any behaviour that can't be predicted?
For a search-time field you cannot use the :: syntax.
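As an illustration of the difference, the two syntaxes behave differently in a search filter (the field and value here are taken from this thread; treat the stanza as illustrative, not a drop-in config):

```
# authorize.conf -- illustrative only
#
# For an indexed field, :: matches against index-time metadata:
#   srchFilter = service::juniper-prod
#
# For a search-time field, only = is valid, but note that it can be
# bypassed by user-defined calculated fields that overwrite the value:
#   srchFilter = service=juniper-prod
```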
@gcusello we have already implemented RBAC-restricted access to indexes. The headache started because of this single shared summary index.
I checked in ChatGPT and the authorize.conf docs and wrote this. Please help me determine whether this will work if a user has access to both of these roles; they still need access to non_prod, prod, and summary data restricted to their service.

Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod
srchFilter = index=non_prod

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod OR (index=opco_summary AND service=juniper-prod))

I am worried about how this srchFilter works across multiple roles (a few managers have access to 6-8 AD groups, meaning 6-8 indexes); they still need to see all data, including summary data, for those 6-8 services.
Hi @Karthikeya,

in Splunk, access to data is restricted at index level, not at app level. In other words, when you create a role, you should define the indexes that the role can access: e.g. role1 accesses only index1 and role2 accesses only index2; then you can assign one role or both of them to a user, depending on your requirements. You can do this in [Settings > Roles > Indexes].

In addition, you can eventually add some restrictions on an index: e.g. on the wineventlog index, one role can access only events with EventCode IN (4624,4625,4634), while another role can access all the events in the index. You can do this in [Settings > Roles > Restrictions].

Ciao.
Giuseppe
@richgalloway @PickleRick I checked in ChatGPT and explored authorize.conf and thought of using the below. Please check and verify, and let me know whether it will work.

Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod
srchFilter = index=non_prod

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod) OR (index=opco_summary AND service=juniper-prod)

I am still confused about = and ::. index and service are both not indexed fields, hence I used =.
Sorry, everyone, for posting multiple posts about my issue. I am just summarising everything here; please help me with a solution.

We created a single summary index for all applications and are afraid of giving users access to it, because anyone who can see it can see other apps' summary data, which would be a security issue. We have created a dashboard on the summary index and disabled "open in search". At some point we need to give them access to the summary index, and if they search index=* then both their restricted index and this summary index show up, which can be risky. Is there any way we can restrict users from running index=*?

NOTE: we are already using RBAC to restrict users to their specific indexes, but this summary index shows summarised data for all of them. Is there any way to restrict this? In the dashboard we restrict them by requiring a field to be selected before the panel with the summary index shows up, filtered accordingly. How do people handle these kinds of situations?

We will create two indexes per application, one for non_prod and one for prod logs, in the same Splunk. They create 2 AD groups (np and prod). We will create the indexes and roles and assign them to the respective AD groups, and one user will have access to both groups. Because there is a single summary index, I thought of filtering it at role level using srchFilter and a service field, so as to stop one user seeing other apps' summary data. I extracted the service field from the raw data and ingested it into the summary index so that it carries the service values; then I will use this field in srchFilter to restrict users. We only need the summary index for prod data (indexes), not non-prod data.
Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod OR (index=opco_summary service=juniper-prod))

In another post I received a comment that indexed fields use ::, but here these two fields (index, service) are not indexed fields, hence I used =.

My doubt is: if a user with these two roles searches only index=non_prod, will he see results or not? How does this search work in the backend? Is there any way to test it? Also, a few users are part of 6-8 AD groups (6-8 indexes); how does srchFilter work there? Please clarify. And what if the user runs index=non_prod: can he still see non_prod logs or not?

If there is no way other than creating a separate summary index for each application, we will need to do it. But is there a way to do it quickly rather than manually? Again, I don't have the coding knowledge to automate this.