All Posts

The regex is just an example, not the real one, since the regex is not the issue here. The purpose of this step is that we need to separate the logs per domain. So my question is whether the props.conf example is the right way to do it, or maybe there is a different way?
Each index is for a different domain; we want to split the logs per domain.
I know this post is super old, but just for the sake of having another possible solution written down somewhere, the following has solved it for me (based on what was discussed in this thread): keep the sourcetype in the universal forwarder's app props.conf with INDEXED_EXTRACTIONS = json:

[HurricaneMTA_Advanced]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
pulldown_type = true
TIMESTAMP_FIELDS = Timestamp
TIME_FORMAT = %FT%T.%7N%:z
SHOULD_LINEMERGE = true
KV_MODE = none
disabled = false

Then add a sourcetype in the props.conf of some app on the search head with KV_MODE set to none:

[HurricaneMTA_Advanced]
KV_MODE = none
disabled = false
Hi @sarit_s6,

as @PickleRick and @marnall also said, the only reasons to have different indexes are different retention periods and access grants, even if you have one big index: size isn't an issue for indexes. Remember that Splunk isn't a database and that indexes aren't tables!

Even following your bad idea (bad because you'd need to create and manage many indexes without any apparent reason), it's possible to dynamically assign the index name by extracting it from the logs.

In addition, your regex is very heavy for your system (you have many .* groups in it, one of them at the beginning) and you're putting a completely useless load on your system. You can check the performance of your regex on regex101.com.

Ciao.
Giuseppe
This long dispatch phase means that it is taking Splunk very long to spawn the search on your indexer. At first glance it would suggest network problems (are both of your components on-prem or in the cloud? If in the cloud, are they in the same cloud zone?) or some DNS issues (so that some timeouts must be happening).
This issue first occurred intermittently after upgrading Splunk Enterprise from 9.0.5 to 9.2.1 on a Linux kernel. An in-place upgrade from 9.2.1 to 9.2.2 didn't fix the issue either, but commenting out this line fixed it:

#CFUNCTYPE(c_int)(lambda: None)

Also tested on another box with the same issue: I commented out the line first and then upgraded from 9.2.1 to 9.2.2; the upgrade overrode the fix and I had to re-apply it. Definitely raising a Splunk support case for this one. Thank you!
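A minimal sketch of how one might locate that line before commenting it out, assuming it lives somewhere under Splunk's bundled Python libraries (the exact file isn't named in this thread, so the search path is an assumption):

# search the Splunk installation for the offending line
grep -rn 'CFUNCTYPE(c_int)(lambda: None)' "$SPLUNK_HOME/lib"

As noted above, an upgrade can overwrite the edited file, so re-check it after every upgrade.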
Hi @Arsenii.Shub,

Thank you for posting on community. I saw you already raised a support case for this, so I would like to share the solution, the results of my experimentation, and some additional information.

Issue: The URLs shown in BT/Transaction Snapshots are incomplete.

Goal: Differentiate slow search requests in the system caused by specific user input.

Tests: I tested the URL behavior on a .NET MVC web app.

Solutions:

URL display in the URL column: while it's not possible to show the full URL with http://host/, we can display the URL as /Search/userInput. Reference: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/configure-instrumentation/transaction-detection-rules/custom-match-rules/net-business-transaction-detection/name-mvc-transactions-by-area-controller-and-action#id-.NameMVCTransactionsbyArea,Controller,andActionv23.1-MVCTransactionNaming

Complete URL display in the BT name column: it is possible to display the complete URL https://host/Search/userInput in the BT name. Reference: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/configure-instrumentation/transaction-detection-rules/uri-based-entry-points

Next steps:

For the partial URL in the URL column (/Search/userInput):
1. Add an App Server Agent Configuration.
2. Set the following .NET Agent Configuration properties to false:
   aspdotnet-core-naming-controlleraction
   aspdotnet-core-naming-controllerarea
3. Restart the AppDynamics.Agent.Coordinator_service and IIS in that sequence. After that, apply load and check the BT/Snapshot if necessary.

For the complete URL in the BT name (https://host/Search/userInput):
1. Navigate to Configuration > Instrumentation > Transaction Detection in your Application.
2. Add a new rule: choose Include, the proper Agent type, and the current Entry Point. Fill in the Name field (it will be shown on your BT). Set its priority higher than the default automatic detection so it takes precedence.
3. Rule configuration: matching condition: URL is not empty. Custom Expression: ${HttpRequest.Scheme}://${HttpRequest.Host}${HttpRequest.Path}${HttpRequest.QueryString}
4. Restart the AppDynamics.Agent.Coordinator_service and IIS in that sequence. After that, apply load and check the BT/Snapshot if necessary.

Additional information: you can also add the custom expression by modifying the default auto-detection rule instead of adding a new one as I did in the steps above.
Another technique you can use is the TERM() search - TERM() searches are much faster than raw data searches. Let's assume your uri is /partner/a/b/c/d; you can do

index=tomcat TERM(a) TERM(b) TERM(c) TERM(d) uri=/partner/a/b/c/d

It will depend on how unique the terms are, but it will certainly provide a way to reduce the amount of data looked at. In the job properties, look at the scanCount property, which will show you the number of events scanned to provide the results.
Is the search slow when returning just the last 60 minutes of data, and does the performance degrade linearly as you increase the time interval? How many events do you get per 24h period? Are you just doing a raw event search over 7 days to demonstrate the problem, or is this part of your use case?

Take a look at the job properties phase_0 property to see what your expanded search is. You can look at the monitoring console to see what the Splunk server metrics look like - perhaps there is a memory issue - take a look at the resource usage dashboards.
According to this chart, a single indexer should be enough for that volume of data. A lot depends on the number of searches being run, however - something Splunk's chart tries to capture in the "number of users" figures. If you have fewer than 24 users but still do a lot of searching, it may be worthwhile to add an indexer or two. Once the data is re-balanced among the indexers, each will perform a fraction of the work and the search should complete in a fraction of the current time. Also, consider adding a sourcetype specifier to the base search, as that can help improve performance.
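For illustration, a sketch of what that base search could look like - the sourcetype name here is hypothetical, use whatever your Tomcat data is actually indexed with:

index=tomcat sourcetype=tomcat:access uri=/partner/a/b/c/d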
You can try a sourcetype rename: https://docs.splunk.com/Documentation/Splunk/latest/Data/Renamesourcetypes
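A minimal sketch, assuming you want events indexed as xyz:iis:prod to be exposed (and have knowledge objects applied) under the original xyz name - swap the names if you need the opposite direction:

# props.conf
[xyz:iis:prod]
rename = xyz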
I've been using a free version of Splunk Cloud, creating dashboards over the past couple of days - it's been great. Last night when I tried to log in using my password I got this message:

For security reasons, your account has been locked out. Please try again later or contact your system administrator.

As far as I know, I am the administrator. I cannot find a way to change settings through the splunk.com account I used to log in.
You could duplicate the field extractions (and more) that apply to sourcetype xyz, then change them to apply to the new sourcetype xyz:iis:prod.
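A sketch of that idea with a made-up extraction (EXTRACT-status is purely illustrative; copy whatever EXTRACT-/REPORT- settings your xyz sourcetype really has):

# props.conf - existing search-time extraction
[xyz]
EXTRACT-status = \s(?<status>\d{3})\s

# duplicated under the new sourcetype
[xyz:iis:prod]
EXTRACT-status = \s(?<status>\d{3})\s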
Out of curiosity - why do you want to split those events into separate indexes? Different retention periods? Access differences?
DDSS is a form of storage for your Cloud instance. It's the equivalent of moving your frozen buckets to S3 storage. If you want to store your data for a longer period, you might simply set up a separate storage unit for frozen buckets and archive them away. Be aware, though, that such data needs to be thawed to be usable again.
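On Splunk Enterprise, archiving frozen buckets to separate storage is a one-line setting per index; a minimal sketch with a hypothetical index name and path:

# indexes.conf
[myindex]
coldToFrozenDir = /archive/splunk/myindex_frozen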
Right. That was !=, not =. You're mostly interested in index=_internal component=AutoLoadBalancedConnectionStrategy host=<your_forwarder>
I have two weeks off, so I'll continue troubleshooting after that. In my opinion there isn't any interesting stuff in the _internal log. You can see it in the screenshot. I used the cluster command to reduce the number of logs. There is component != metric in the SPL.
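A sketch of that kind of search, combining the cluster command with the host filter suggested above (the forwarder host is a placeholder, and the component value to exclude in _internal is typically Metrics):

index=_internal host=<your_forwarder> component!=Metrics
| cluster showcount=true
| sort - cluster_count
| table cluster_count _raw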
Yes, but keep in mind that this will not affect events that are already in the one big index. New incoming events will be routed to other indexes if they match the corresponding transform regex. Every transform in props.conf will be tried against the logs that match the stanza. This means that if a regex in a transform matches the event, the index value of the event will be overwritten. If multiple regexes in the transforms match an event, that value will be overwritten multiple times and the event will retain the value of the last transform whose regex matched. Therefore you should make the regexes strict, so that logs that should go to newIndex do not accidentally go into newIndex1. A minimal sketch of the pattern is shown below.
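To make the ordering concrete, a sketch with hypothetical domain regexes - one transform per target index, listed in the order they should be tried (the last matching transform wins):

# transforms.conf
[setIdx-domain1]
REGEX = domain1\.example\.com
DEST_KEY = _MetaData:Index
FORMAT = index_domain1

[setIdx-domain2]
REGEX = domain2\.example\.com
DEST_KEY = _MetaData:Index
FORMAT = index_domain2

# props.conf - transforms in one class are applied in the listed order
[your_sourcetype]
TRANSFORMS-routing = setIdx-domain1, setIdx-domain2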
Hello,

I have one big index with lots of files, and I want to reroute logs from there to different indexes. The rerouting will be done by a regex that looks for the domain name in the logs. For each domain I will create a separate stanza in transforms.conf, for example:

[setIdx-index1]
REGEX = ^(?!.*{ "workflow_id": .*, "workflow_type": .*, "workflow_name": .*, "jira_ticket": .*, "actor": .*, "deployment_status": .*, "start_time": .*, "end_time": .*, ("app_name"|"additional_data"): .* }).*$
FORMAT = new_index
DEST_KEY = _MetaData:Index
LOOKAHEAD = 40000

My question is about props.conf - how should I configure it if I have more than one index?

[index1]
TRANSFORMS-setIdx = setIdx-index1
TRANSFORMS-setIdx2 = newIndex
TRANSFORMS-setIdx3 = newIndex1
TRANSFORMS-setIdx4 = newIndex2

Should it work?
That means that your installation has not completed successfully. If you try to run the installer again, does it start a clean installation or does it offer to repair/uninstall? Do you have a service which should be starting the Splunk process on your system? Do your event logs say anything reasonable about the installation process? There might also be a log file from the installation in the %temp% directory (it should be called MSIsomething.log). You can also try to install Splunk again, this time explicitly requesting that an installation log be created.

https://learn.microsoft.com/en-gb/windows/win32/msi/command-line-options?redirectedfrom=MSDN
https://docs.splunk.com/Documentation/Splunk/9.2.2/Installation/InstallonWindowsviathecommandline
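For reference, a sketch of requesting a verbose installer log from the command line - the installer filename and log path are placeholders:

msiexec /i splunk-enterprise.msi /L*v C:\temp\splunk_install.log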