All Posts

I assume you have already tried these or similar openssl commands?

openssl x509 -in certname.crt -out certname.pem -outform PEM
openssl x509 -inform DER -in certname.crt -out certname.pem -text

Could you also try renaming the .crt directly to .pem? You might be lucky and it will already be in the PEM format.
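If it helps, a quick way to tell which format you are starting from before converting (assuming openssl is on the path; certname.crt is the file from the question):

# Prints the certificate details only if the file is already PEM-encoded
openssl x509 -in certname.crt -noout -text

# If that fails, try reading it as DER instead
openssl x509 -inform DER -in certname.crt -noout -text

If the first command succeeds, renaming the file to .pem should be enough; if only the second one works, the conversion commands above apply.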
Hi @mfonisso, I’m a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hello Team,

As per https://docs.splunk.com/Documentation/Splunk/9.2.0/DistSearch/Knowledgebundlereplication: "The search head needs to distribute this material to its search peers so that they can properly execute queries on its behalf" and "The knowledge bundle consists of a set of files that the search peers ordinarily need in order to perform their searches".

Could you please give me one example of why we really need it? I had the impression that to return search results to the SH, an indexer just needs the SPL query plus its locally indexed data and metadata.

One of my guesses for a good example was lookup files, but I assumed the indexer should not need any lookup files, since that job is done by the search head, not the indexer. The same goes for other knowledge objects like tags, event types, macros, etc. - those should not be needed on the indexer to perform the search; they are used by the search head to enrich the data returned by the indexer.

Another theory: we distribute those files not to help with searching, but with parsing and indexing (for example via props.conf and transforms.conf). Maybe that is the case?

Extra question about the conf files delivered in the bundle: if I understand correctly, those settings are applied in memory only, without modifying any existing conf files on the indexer, but at the same time overriding in-memory settings for, e.g., indexes.conf? If so, should I be able to run "splunk btool indexes list" and see something different than "splunk show" - i.e. compare the current configuration files against what was sent in the bundle and applied in memory? What are the best practices here? What am I missing?

Thanks,
Michal
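One concrete illustration of why search peers need knowledge objects (the index, sourcetype and lookup name below are hypothetical): when a lookup sits in the streaming part of a search, i.e. before the first transforming command, each indexer applies it to its own events and only sends partial stats results back, so the indexer needs its own copy of the lookup from the bundle.

index=web sourcetype=access_combined
| lookup user_roles user OUTPUT role
| stats count by role

Here both the lookup and the initial stats run on the peers; the search head only merges the partial results. On the btool question, note that in a default install the replicated bundle is unpacked on the peers under var/run/searchpeers and applied per search, rather than being merged into the peer's own etc/ configuration, which btool reads.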
Hi @Fadil.Chalakandy, Looking into this, will report back. 
Try something like this

(index=default-va6* sourcetype="myengine-stage" "API call is True for MyEngine" OR "Target language count") OR (index=platform-va6 sourcetype="platform-ue*" "testabc-123" "Marked request as")
| rex field=_raw "Marked request as (?<finalStatus>\w+).+ x-request-id: (?<reqID>.+)"
| rex field=_raw "request_id=(?<reqID>.+?) - .+(Target language count|API call is True for MyEngine)"
| rex field=_raw "Target language count (?<num_target>\d+)"
| rex field=_raw "API call is (?<callTrue>True) for MyEngine"
| stats first(num_target) as num_target first(callTrue) as callTrue first(finalStatus) as finalStatus by reqID
| where callTrue=="True" AND isnotnull(num_target)
Hey, thanks for the comment! I will formulate my question better and share more information as needed.
Hi @Dean.Marchetti, Thanks for asking your question on the Community. Have you found a solution or a workaround you could share? If you have not yet, you can also try contacting AppDynamics Support: How do I submit a Support ticket? An FAQ
Hello marnall, First of all, thank you for your reply. I thought the "wildcard" option was only used for fields containing spaces or commas, and that it didn't work with the * symbol. I'll try this quickly and report back here. Regards
How can this be done for a Windows system, please?
Thanks for the help!!! The initial query cannot be replaced with the shared query, because the backend engine logs "Target language count" with a numeric count value for each request/process. Only a subset of these requests triggers an API call, marked as true, and the engine then processes subsequent log events only for those requests where the API call value is true.

Therefore, my objective is to capture all requests with a true API call and obtain the corresponding target language count values. This allows me to multiply the language count by the total count of processed requests. Your suggested query retrieves all events across all processed requests where the language count was processed.

Additionally, there is another index, the "platform" index, to which the engine writes events with the final request status marked as either "succeed" or "failed". In line with my previous objective, I aim to extract the final status from this index for requests where the API call was true. Consequently, I am attempting to integrate the initial query with a new query for the platform index.

My purpose, as per the initially shared query: I need to identify request IDs with "API Call" as true, get the count of "Target_Lang" for these IDs, calculate the total API calls by multiplying the num_lang count with the request ID count, then fetch the final status from the platform index for the filtered request IDs and determine the count of failed/successful API calls based on that status.

Below are sample events from both indexes when running the search queries:

Sub_Query_1: index=default-va6 sourcetype="myengine-stage" ("API call is True for MyEngine" OR "Target language count") "testabc-123"
Event-1: 2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-123 - user-id=test01 Target language count 1
Event-2: 29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-123 - user-id=test01 API call is True for MyEngine

Sub_Query_2: index=platform-va6 sourcetype="platform-ue*" "testabc-123" "Marked request as"
29/03/2024 18:01:20.556 message: Marked request as succeed. {"status":"PREDICT_SUCCESS"} x-request-id: testabc-123
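For what it's worth, one sketch of how the tail end of the suggested search could be extended to the numbers described above, building on the stats-by-reqID rows from the earlier answer. The "succeed"/"failed" values are assumptions taken from the sample events, so adjust if the platform index uses different wording:

| stats first(num_target) as num_target first(callTrue) as callTrue first(finalStatus) as finalStatus by reqID
| where callTrue=="True" AND isnotnull(num_target)
| stats dc(reqID) as request_count sum(num_target) as total_api_calls count(eval(finalStatus=="succeed")) as succeeded count(eval(finalStatus=="failed")) as failed

Here sum(num_target) adds up the per-request language counts, which is the same as multiplying the language count by the request count when every request has the same number of target languages.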
Hi, thanks for the quick response. I have tried both of the options below:

Option 1:
| eval status=case(like(status, "2%"),"200|201",like(status, "5%"),"503")
| timechart span=1d@d usenull=false useother=f count(status) by status
| eval status=round(status/1000000,2)."M"

Option 2:
| eval status = if(match(status, "20[0-1]"), "success(200 and 201)", status)
| eval status=case(like(status, "2%"),"200|201",like(status, "5%"),"503")
| timechart span=1d@d usenull=false useother=f count(status) by status
| eval count=round(count/1000000,2)."M"

But in my graph I don't see any difference. Below is the output, which still shows the large numbers instead of the shortened numbers with "M" appended.
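One possible reason nothing changes: after timechart ... by status, the values live in columns named after each status ("200|201", "503"), so a later eval on a field called status or count has nothing to operate on. A sketch that rounds the generated columns instead, keeping the same case() bucketing as above:

| eval status=case(like(status, "2%"),"200|201",like(status, "5%"),"503")
| timechart span=1d@d usenull=false useother=f count by status
| foreach * [eval <<FIELD>> = round('<<FIELD>>'/1000000,2)."M"]

Note that appending "M" turns the values into strings, which is fine for a statistics table, but a chart needs numeric values - in that case drop the ."M" and label the axis as millions instead.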
It depends on what this number is supposed to represent
I would like to create a scheduled search sending a multi-line Slack notification via the Splunk API. I can create the search, there's no problem. The Slack notification also works, but it is limited to a single line. I would like to split the notification across multiple lines. I am using the "Slack Notification Alert" app and I have tried a few characters like "\n", "\r", "<br />", "\", and none of them worked. It seems that all of these are escaped and the Slack message is still a one-liner like "test\ntest" instead of "test" and "test" on separate lines. Of course I can use a browser to go to the Splunk web UI and change it there, but we need to do this at scale, and changing it manually instead of via the API is not efficient at all. Please help, thanks a lot! Slack Notification Alert
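For reference, a sketch of where this usually goes wrong on the REST side: the two characters backslash and n arrive literally, so the request has to carry a real (URL-encoded) newline. The saved search name, credentials and the action.slack.param.message parameter name below are assumptions - check your own savedsearches.conf for the exact parameter the app stores.

# --data-urlencode sends the embedded line break as %0A, i.e. a real newline rather than the text "\n"
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/saved/searches/my_slack_alert \
  --data-urlencode 'action.slack.param.message=first line
second line'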
Thank you! The first appendpipe achieved the desired objective! The size constraint should not be a problem because I had all the unixtime values snapped to the month with @mon, so there are only 300 rows in this table.

The way to explain this odd situation is that each day we get a data dump of the population, but the field values may change by the day. The issue is that Splunk has a 90-day data retention policy for our events, so basing everything purely on _time only goes back 90 days. BUT, in our events there are additional unixtime fields (two, to be exact) that go back much further than 90 days, and we needed to use these to provide a historical month-by-month view (hence snapping the unixtime with @mon).

Total_A was the total sum of the population over time based on Unixtime_A, and Total_B is a conditional sum of the population where only a field met a condition, with Unixtime_B containing the time that condition was first met. That's why I wanted Total_A and Total_B to be separate, while Unixtime_A and Unixtime_B could be appended together. To put some context to it, Total_A is the total vulnerabilities population, regardless of whether each was fixed or active, based on Unixtime_A being when it was first detected. Total_B is the total fixed vulnerabilities population, based on Unixtime_B being when it was fixed.
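For anyone reading along, the @mon snapping described above looks roughly like this (Unixtime_A and Unixtime_B stand for the two epoch fields mentioned in the post; the strftime is only there to make the month label readable):

| eval month_A = strftime(relative_time(Unixtime_A, "@mon"), "%Y-%m")
| eval month_B = strftime(relative_time(Unixtime_B, "@mon"), "%Y-%m")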
Hello, yes, sorry, I meant deploymentclient.conf; I didn't configure the HF as a client at all. All I did was point the client towards the HF and turn forwarding on on the HF as well.
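For comparison, the two pieces of plumbing usually involved here look like the sketch below: a deploymentclient.conf on whichever instance the deployment server should manage, and an outputs.conf on the client that sends its data to the heavy forwarder. Hostnames and ports are placeholders.

# deploymentclient.conf - registers this instance with the deployment server
[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089

# outputs.conf - forwards this instance's data to the heavy forwarder
[tcpout:hf_group]
server = hf.example.com:9997

The heavy forwarder would then need a matching receiving input, e.g. a [splunktcp://9997] stanza in its inputs.conf.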
Apologies for all the parentheses, I was just trying to keep things straight in my head. There's definitely a better way to frame the query. I tried what you suggested with:

if(id.resp_h="front end",resp_bytes,0)

Even simplifying the expression to filter on one IP address at a time gives an error. Trying to use it like this:

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| fields orig_bytes, resp_bytes
| eval terabytes=((if(id.resp_h=192.168.0.1,resp_bytes,0))+(if(id.orig_h=192.168.0.1,orig_bytes,0)))/1024/1024/1024/1024
| stats sum(terabytes)

I just get an error back from Splunk:

Error in EvalCommand: the number 192.168.0.1 is invalid
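The error comes from how eval parses that expression: without quotes, 192.168.0.1 is read as a malformed number, and field names containing dots (id.resp_h, id.orig_h) have to be wrapped in single quotes inside eval, while string literals take double quotes. A sketch of the corrected lines, keeping the IP from the example - note the fields command also needs to keep id.orig_h and id.resp_h, or the if() tests have nothing to compare against:

| fields orig_bytes, resp_bytes, id.orig_h, id.resp_h
| eval terabytes = (if('id.resp_h'=="192.168.0.1", resp_bytes, 0) + if('id.orig_h'=="192.168.0.1", orig_bytes, 0)) / 1024 / 1024 / 1024 / 1024
| stats sum(terabytes) as terabytes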
Can I do this by source? And can each source take a different props.conf configuration?
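If the question is whether props.conf can be scoped to a source rather than a sourcetype: yes, via a [source::...] stanza, which also accepts wildcards. A minimal, hypothetical sketch (path and settings are placeholders):

# props.conf - settings applied only to events from this source path
[source::/var/log/myapp/*.log]
sourcetype = myapp_logs
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)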
After attending a Splunk 9.2 webinar yesterday (3/28/24), I pulled a fresh Docker container down using the "latest" tag and found that I had v9.0.9 rather than v9.2.1. Is it possible that this is a recurrence of the build issue mentioned in this old post: https://community.splunk.com/t5/Deployment-Architecture/Why-is-Docker-latest-not-on-most-recent-version/td-p/600958 ?
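Whatever the cause of the lag on the latest tag, one workaround is to pin the exact version when pulling. This assumes the 9.2.1 tag has been published to Docker Hub; the run options follow the image's documented quickstart, with a placeholder password:

docker pull splunk/splunk:9.2.1
docker run -d -p 8000:8000 -e SPLUNK_START_ARGS=--accept-license -e SPLUNK_PASSWORD=changeme2024 splunk/splunk:9.2.1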
How do I check my resources, please? Up until 2 days ago my Splunk had been operating well.
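If "resources" means CPU, memory and disk on the Splunk host, one place to look without leaving Splunk is the introspection data the server collects about itself. The search below is a sketch that assumes access to the _introspection index; the field names are as they typically appear on a 9.x install:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart span=5m avg(data.cpu_system_pct) as cpu_system_pct avg(data.mem_used) as mem_used_mb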
So here is my understanding and the way that I've got our on-prem instance configured.

Hot buckets are stored on a local flash array. When a bucket closes, it keeps the closed bucket on the flash drive and writes a copy to the S3 storage. The S3 copy is considered to be the 'master copy'. I try not to use the term 'warm bucket', but instead use 'cached bucket'. All searches are performed on either hot or cached buckets on the local flash array. Cached buckets are eligible for eviction from local storage by the cache manager. So if your search needs a bucket that is not on local storage, it will evict eligible cached buckets, retrieve the buckets from S3 storage and then perform the search.

The frozenTimePeriod defines our overall retention time. We use hotlist_recency_secs to define when a cached bucket is eligible for eviction. That is, buckets younger than the hotlist_recency_secs age are not eligible for eviction. Our statistics show that probably 90% of the queries have a time span of 7 days or less (research gosplunk.com for the query). Thus, by setting hotlist_recency_secs to 14 days, we ensure that the searched buckets are on local, searchable storage without having to reach out to the S3 storage (which is slower).

One last thing. We need 1 year of searchable retention. However, we also need to keep 30 months of total retention. To accomplish this, I use ingest actions to the S3 storage. Ingest actions will write the events in compressed JSON format by year, month, day, and sourcetype.

Hope this helps.
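For readers who want to see where those knobs live, the setup described above maps onto indexes.conf roughly like the sketch below. The index name, volume name, paths and values are placeholders; hotlist_recency_secs can also be set globally under [cachemanager] in server.conf, and the remote volume still needs its S3 credentials or an IAM role.

# indexes.conf - illustrative SmartStore settings only
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes

# 1 year of searchable retention; last 14 days protected from cache eviction
[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:remote_store/$_index_name
frozenTimePeriodInSecs = 31536000
hotlist_recency_secs = 1209600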