All Posts


Yes, I have tried join with type inner, outer, and left. Nothing gives me accurate results.
Can you please show an example of how to do it?
What I want to do is summarize a completed transaction of ActivityIDs, like Windows updates. However, I do not know whether an ActivityID is reused after a reboot (and so may not be part of the original transaction), or whether it is reused again after a period of time within the 24 hours covered by my reports. Disclaimer: I do not know that much about Microsoft Events... so maybe this sounds all wrong?
Hello, this is an old topic, but I want to know whether pushing an add-on from the deployer using encrypted credentials in a new local/passwords.conf (previously encrypted by a clustered search head) is different in terms of behavior from configuring the add-on on a search head (web UI) and letting the SHC replicate passwords.conf?
As I wrote several times before, _indextime is a field which _can_ be used to troubleshoot your ingestion process _if_ you know your data pipeline and data characteristics: if you know whether your event time is reliable, if you have properly configured timestamp extraction, and if you know the latency between the event itself and the time the source emits the event (the Apache example is a great one here).
I have an outside SAML system (Okta) which we are using to log in to our Splunk system, and we are defining indexes for people in different buildings to work against (named after the buildings). The problem is that people move from building to building and they seem to accrete access to virtually every index (building). We need to stop this by making sure that everyone only gets the access they need for their own building, so that creating and revoking access is all controlled within Okta. The other issue is that our organisation moves buildings quite often (due to the nature of the business).

So, I have created the following: Okta User -> Okta Group, Splunk Role -> Building Index. I need to be able to programmatically make the link of SAML Group -> Splunk Role. I can read the link between SAML Group and Splunk Role with the REST API using the information in the following page (using /services/admin/SAML-groups), but I cannot find any documentation about creating and deleting the links. https://docs.splunk.com/Documentation/Splunk/9.2.2/RESTREF/RESTaccess

I know that I can maintain the links using the information in the URL below, but not programmatically as yet. https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/Modifyorremoverolemappings

Does anyone know how I can do this programmatically, please?
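Not a confirmed answer, but /services/admin/SAML-groups is an EAI-style endpoint, so creating and deleting mappings may work the same way as for other EAI collections; the group name, role name, and parameter names below are assumptions to verify against the REST reference for your version:

# Assumption: SAML-groups accepts standard EAI create/delete operations (unverified).
# "building_a_users" and "building_a_role" are hypothetical names.
curl -k -u admin:changeme https://localhost:8089/services/admin/SAML-groups \
  -d name=building_a_users -d roles=building_a_role

# Delete the mapping again.
curl -k -u admin:changeme -X DELETE \
  https://localhost:8089/services/admin/SAML-groups/building_a_users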
Hi @Gustavo.Marconi, Have you been able to check out the reply? Did it help? Let us know by clicking the Accept as Solution button or continue the conversation. 
Another thing to bear in mind is what the _time (stamp) means. If it is interpreted from the data in the event, then it is the time that the application has chosen to put into the data. For example, with Apache HTTPD logs (and other logs), the timestamp is when the request was received, but the logged event is written when the response was sent back, so it is already lagging by whatever the response time of the request was.

12:01:00 - request received by Apache (_time)
12:01:10 - response sent by Apache
12:01:11 - event logged by Apache (request time and duration of 10 seconds)
12:01:14 - event indexed by Splunk (_indextime)

As you can see in this example, the difference between _time and _indextime is 14 seconds, but the lag between when the event was written and when it was indexed is only 3 seconds. So, unless the _time value is the time (or as close as possible to the time) that the application wrote the event (so it was available to the forwarders to send to Splunk), the difference between _time and _indextime can represent a number of factors, and you need to understand what the values represent to determine whether they are of any value. Having said that, comparing the difference with historic differences may at least give you an insight as to whether there is any degradation/variation, which might be worth investigating.
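If you want to watch that difference over time, a minimal sketch (the index name is a placeholder) is:

index=your_index
| eval lag_seconds = _indextime - _time
| timechart span=5m median(lag_seconds) as median_lag perc95(lag_seconds) as p95_lag

A rising median or p95 compared to its historic baseline is the kind of degradation/variation mentioned above.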
Hello, has anyone already had a similar error: “07-17-2024 14:28:21.721 +0200 WARN PasswordHandler [38267 TcpChannelThread] - Unable to decrypt passwords.conf/[credential:REST_CREDENTIAL#TA-thehive-cortex#configs/conf-ta_thehive_cortex_account:service_thehivesplunk_cred_sep2:]/password” This was solved by using clear-text credentials or (it seems) by pushing the add-on with a password already encrypted by the SHC. Thanks.
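For context, the credential that warning refers to lives in a passwords.conf stanza roughly like the sketch below (the encrypted value here is made up). A $7$ value can normally only be decrypted by an instance holding the same splunk.secret it was encrypted with, which is usually what decides whether a deployer-pushed credential is readable by the SHC members:

# local/passwords.conf -- illustrative sketch, encrypted value is fabricated
[credential:REST_CREDENTIAL#TA-thehive-cortex#configs/conf-ta_thehive_cortex_account:service_thehivesplunk_cred_sep2:]
password = $7$Jx0exampleonly0not0a0real0value==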
The "message" node in the event is an Apache HTTPD access log.  So, apply Splunk's built-in access-extraction to it. ("message" should have been extracted by Splunk already.)     | rename message ... See more...
The "message" node in the event is an Apache HTTPD access log.  So, apply Splunk's built-in access-extraction to it. ("message" should have been extracted by Splunk already.)     | rename message as _raw | extract access-extractions | timechart span=1month sum(bytes) as total_traffic     Your sample will give total_traffic 46989 Here is an emulation using the sample you give, corrected for JSON completion:     | makeresults | eval _raw = "{\"time\":\"2024-07-18T09:29:59.900525659-05:00\",\"stream\":\"stdout\",\"logtag\":\"F\",\"message\":\"10.42.11.59 - - [18/Jul/2024:14:29:59 +0000] \\\"POST / HTTP/1.1\\\" 200 46989 \\\"-\\\" \\\"Microsoft Office/16.0 (Windows NT 10.0; Microsoft Word 16.0.17628; Pro)\\\"\",\"kubernetes\":{\"pod_name\":\"apache-4\",\"namespace_name\":\"some name\"}" | spath | eval _time = strptime(time, "%FT%T.%9N%z") ``` data emulation ```     Play with it and compare with real data
Hi @woodcock, this is an old topic, however I just want to know if pushing an add-on from the deployer using encrypted credentials in a new local/passwords.conf (previously encrypted by a clustered search head) is different in terms of behavior from configuring the add-on on a search head (web UI) and letting the SHC replicate passwords.conf?
Make your selector token "24hour", "7day", etc. Let's call it $span_tok$. This should do:

index=myindex earliest=-$span_tok$-$span_tok$
| timechart span=$span_tok$ count
| streamstats delta(count) as pct_change
| eval pct_change = pct_change / (count - pct_change) * 100

The idea is simple: look back 2x $span_tok$, then calculate the delta on the go.
Since _time and _indextime are expressed in seconds, their difference will be in seconds as well. But to make things more complicated, while for many sources low latency is the desired state, there can be cases where significant latency is normal (especially if events are ingested in batches).
Hi @mfbma, I’m a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Any update on this error?
No. Let me explain with an example.

An event with timestamp 3:14 comes in at 3:17.
An event with timestamp 3:15 comes in at 3:18.

That means if you run a search at 3:18 with earliest=-2m@m latest=now, it will search events between 3:16 and 3:18. Logically this will never include any events, because events are always delayed by 3 minutes.

The solution is to never search the last 3 minutes when writing the search, and let those events be picked up by the next scheduled run, for example with:

earliest=-63m@m latest=-3m@m
OR
earliest=-13m@m latest=-3m@m
OR
earliest can be anything, but keep the latest such that it never searches the most recent events, to avoid the issue of events being missed.

There is another solution as well with _index_earliest and _index_latest, but that's a topic for another time (a bit complicated).

I hope this helps!!
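As a minimal sketch of the _index_earliest/_index_latest alternative mentioned above (the index name and windows are placeholders, assuming a search scheduled every 5 minutes):

index=myindex earliest=-30m@m latest=now _index_earliest=-5m@m _index_latest=@m
| stats count

Here events are selected by when they were indexed (the last 5 minutes), so late-arriving events are still picked up by the run that follows their arrival; the wider earliest/latest just limits how far back Splunk scans by event time.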
Hi @gcusello, Our sites are all Drupal and the raw log looks like this: {"time":"2024-07-18T09:29:59.900525659-05:00","stream":"stdout","logtag":"F","message":"10.42.11.59 - - [18/Jul/2024:14:29:59 +0000] \"POST / HTTP/1.1\" 200 46989 \"-\" \"Microsoft Office/16.0 (Windows NT 10.0; Microsoft Word 16.0.17628; Pro)\"","kubernetes":{"pod_name":"apache-4","namespace_name" ... We want to calculate total bandwidth.
Since Microsoft Teams has deprecated the O365 connectors' standard incoming webhooks and the use of MessageCard-type cards for sending messages, this Microsoft Teams messages publication add-on is not working with the workflow endpoint. Also, using a standard webhook and providing the workflow URL returns errors, since the payload is not in the format of the Adaptive Card API message that workflows expect. Do you have a solution for how to connect alerts with Microsoft Teams channels now, since this deprecation of connectors?
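For reference (not a fix for the add-on itself), the Teams workflow that replaces the connectors generally expects the payload to be wrapped as an Adaptive Card attachment rather than the old MessageCard format. A rough sketch of that envelope, with the workflow URL and text being placeholders:

curl -X POST "https://your-workflow-url" \
  -H "Content-Type: application/json" \
  -d '{
        "type": "message",
        "attachments": [
          {
            "contentType": "application/vnd.microsoft.card.adaptive",
            "content": {
              "type": "AdaptiveCard",
              "version": "1.4",
              "body": [
                { "type": "TextBlock", "text": "Splunk alert: example message" }
              ]
            }
          }
        ]
      }'

A webhook or custom alert action that produces this shape (instead of a MessageCard) is the general direction Microsoft points to after the connector deprecation.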
Hello

I'd like to create a single value viz that displays the percent change from a point in time to now. Basically, I have a dashboard with a panel that simply counts the number of records in the given time range. The time is a simple time picker and the base search is simply:

index=myindex | stats count

I would like to add a panel, maybe a single value viz, that shows a percent change. For example, if the default is "Last 24 hours" I would like to show the count of the last 24 hours and the percent change from the previous 24 hours. Additionally, if the user selected "Last 7 days" I would like it to give the count of the last 7 days and the percent change from the 7 days before that.

Thanks for the help
While there can be many reasons for memory growth, one of them could be increased memory usage by the idle search process pool (search-launcher).

index=_introspection component=PerProcess host=<any one SH or IDX host>
| timechart span=5s sum(data.mem_used) as mem_usedMB by data.process_type useother=f usenull=f

Example: if memory usage by `search-launcher` is way higher than `search`, then the idle search process pool (search-launcher) is wasting system memory. If you see that trend, you want to reduce the idle search process pool.

There are several options in limits.conf to reduce the idle search process pool. One option is to set enable_search_process_long_lifespan = false in server.conf (a new option in 9.1 and above):

enable_search_process_long_lifespan = <boolean>
* Controls whether the search process can have a long lifespan.
* Configuring a long lifespan on a search process can optimize performance by reducing the number of new processes that are launched and old processes that are reaped, and is a more efficient use of system resources.
* When set to "true": Splunk software does the following:
  * Suppresses increases in the configuration generation. See the 'conf_generation_include' setting for more information.
  * Avoids unnecessary replication of search configuration bundles.
  * Allows a certain number of idle search processes to live.
  * Sets the size of the pool of search processes.
  * Checks memory usage before a search process is reused.
* When set to "false": The lifespan of a search process at the 50th percentile is approximately 30 seconds.
* NOTE: Do not change this setting unless instructed to do so by Splunk Support.
* Default: true

Why does the idle search process pool appear to be unused (more idle searches compared to the actual number of searches running on the peer)? Before a search request is dispatched to peers, SHCs/SHs also need to first find the common knowledge bundle across peers. On a peer, only an idle search process created with the matching common knowledge bundle is eligible for re-use. That's why in most cases the idle search process pool remains unused: the overall pool is a collection of idle search processes associated with different knowledge bundles. Now think of a scenario with multiple SHC clusters (for example ES/ITSI/ad-hoc etc.), each SH cluster replicating its own knowledge bundles. The idle search process pool is then a collection of idle search processes associated with different knowledge bundles from different search heads.

You can search for enable_search_process_long_lifespan in limits.conf for the impact; it controls a lot of configs. But the main reason for memory growth is max_search_process_pool (a default pool of 2048 idle search processes):

max_search_process_pool = auto | <integer>
* The maximum number of search processes that can be launched to run searches in the pool of preforked search processes.
* The setting is valid if the 'enable_search_process_long_lifespan' setting in the server.conf file is set to "true".
* Use this setting to limit the total number of running search processes so that a search head or peer is prevented from being overloaded or using high system resources (CPU, memory, etc).
* When set to "auto": Splunk server determines the pool size by multiplying the number of CPU cores and the allowed number of search processes (16). The pool size is 64 at minimum.
* When set to "-1" or another negative value: The pool size is not limited.
* Has no effect on Windows or if "search_process_mode" is not "auto".
* Default: 2048

If an instance is running 1000 searches per minute, and assuming bundle replication is not frequent, why create a pool of 2048 idle search processes when the maximum requirement is 1000? With surplus memory this is not an issue, but a 2048-process idle pool is not OK for memory-limited instances.
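As a sketch of what those two alternatives might look like in configuration; the stanza for the server.conf setting and the pool size of 256 are assumptions here, so check server.conf.spec and limits.conf.spec for your version (and note the spec's warning about changing enable_search_process_long_lifespan without Splunk Support guidance):

# Option A: server.conf -- assumption: the setting lives in the [general] stanza
[general]
enable_search_process_long_lifespan = false

# Option B: limits.conf -- keep long lifespans but cap the preforked pool
[search]
max_search_process_pool = 256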