All Posts

My alerts only allow me to choose to trigger once or for each result.
OK. I was afraid that you had tried to install the app using the GUI and it had landed on one SH only. If you pushed it properly from the deployer, it should be OK. No, it does not look like a normal situation (although I vaguely recall someone having a similar problem). Are you sure your LB (you must have some HTTP reverse proxy in place) isn't doing something strange with your requests? Are you able to launch the CIM setup properly if you connect directly to one SH?
Hi @Satyapv, as I said, see the Link Switches example in the Splunk Dashboard Examples app; it is exactly what you want, with the only exception that the button cannot be coloured. Ciao. Giuseppe
Hi @BTB, the difference between an alert and a report is that if you don't have any events you don't receive an alert, whereas you do receive an empty report. As an attachment to the alert you can have a CSV or PDF file with all the search results, so why can't you use an alert? You get one alert per result only if you configure it to trigger for each result; you can instead configure a single alert that delivers all the results in one file. Ciao. Giuseppe
I don't want to send an alert because I want the benefits of a report (all results in one file as opposed to sending an alert for each hit on the search), so I'm trying to figure out how to send a report but only if it has results. If it has zero results, I don't want it to send. 
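One possible way to get this, sketched as a savedsearches.conf stanza (the stanza name, search, schedule, and recipient below are placeholders, not taken from this thread): configure the scheduled search as an alert that triggers only when the result count is greater than zero and attaches everything in one CSV, which behaves like a conditional report.
# Sketch only - stanza name, search, schedule, and recipient are placeholders
[Daily report - send only when there are results]
search = index=main sourcetype=my_data | stats count by host
enableSched = 1
cron_schedule = 0 6 * * *
# trigger only when at least one result comes back
counttype = number of events
relation = greater than
quantity = 0
# attach all results in a single CSV, like a report would
action.email = 1
action.email.to = team@example.com
action.email.sendcsv = 1
On a zero-result run nothing fires, while a matching run still delivers every result in one file.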
I am not using a base search; I am using two different queries to get the exact counts I want. The first query is this:
index=US_WHCRM_int sourcetype="bmw-crm-wh-xl-cms-int-api" ("*Element*: bmw-cm-wh-xl-cms-contractWithCustomers-flow/processors/2/processors/0 @ bmw-crm-wh-xl-cms-int-api:bmw-crm-wh-xl-cms-api-impl/bmw-cm-wh-xl-cms-contractWithCustomers*") OR "*flow started put*contractWithCustomers" OR "*flow started put*customers:application*" OR "ERROR Message" OR "flow started put*contracts:application*" | rex field=message "(?<json_ext>\{[\w\W]*\})" | rex field=message "put:\\\\(?<Entity>[^:]+)" | rename attributes{}.value.details as details | rename properties.correlationId as correlationId | table _time properties.* message json_ext details Entity | spath input=json_ext | stats count as Entity
which gives me Entity as the total fetch count. The other query is this:
index=US_WHCRM_int (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=INFO ("*Element*: bmw-cm-wh-xl-cms-contractWithCustomers-flow/processors/2/processors/0 @ bmw-crm-wh-xl-cms-int-api:bmw-crm-wh-xl-cms-api-impl/bmw-cm-wh-xl-cms-contractWithCustomers*") OR "*flow started put*contractWithCustomers" OR "*flow started put*customers:application*" OR "ERROR Message" OR "flow started put*contracts:application*") OR (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=ERROR "Error Message") | rex field=message "(?<json_ext>\{[\w\W]*\})" | rex field=message "put:\\\\(?<Entity>[^:]+)" | rename attributes{}.value.details as details | rename properties.correlationId as correlationId | table _time properties.* message json_ext details Entity | spath input=json_ext | stats count by title | fields count
which gives me the title count as the error count. Now I want a success count, which can be calculated by subtracting: total fetch count - error count. How will I get that? Please help me with that. Hope this helps you to understand.
Hi, is there any way to find a transaction flow like this? I have a log file containing 50 million transactions like this:
16:30:53:002 moduleA:[C1]L[143]F[10]ID[123456]
16:30:54:002 moduleA:[C2]L[143]F[20]ID[123456]
16:30:55:002 moduleB:[C5]L[143]F[02]ID[123456]
16:30:56:002 moduleC:[C12]L[143]F[30]ID[123456]
16:30:57:002 moduleD:[C5]L[143]F[7]ID[123456]
16:30:58:002 moduleE:[C1]L[143]F[10]ID[123456]
16:30:59:002 moduleF:[C1]L[143]F[11]ID[123456]
16:30:60:002 moduleZ:[C1]L[143]F[11]ID[123456]
I need to find the module flow for each transaction and then find the rare flows.
Challenges:
1. There is no specific "key value" that exists on all lines belonging to one transaction.
2. The only key value that exists on every line is ID[123456].
3. An ID might be duplicated and might belong to several transactions.
4. Module names have no specific pattern and there are lots of them.
5. The ID is not at a fixed position (here it is at the end of each line).
Any idea? Thanks
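A hedged sketch of one possible approach (the index name, the timestamp regex, and the 30s maxpause are assumptions, and on 50 million events transaction will be expensive):
index=my_transaction_logs
| rex "ID\[(?<id>\d+)\]"
| rex "^\d{2}:\d{2}:\d{2}:\d{3}\s+(?<module>[^:]+):"
| transaction id maxpause=30s mvlist=module
| eval flow=mvjoin(module, " -> ")
| rare limit=20 flow
mvlist=module keeps the modules in event order rather than as a deduplicated set, maxpause separates reused IDs into distinct transactions, and rare surfaces the least common flow strings.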
Hello @kamlesh_vaghela, this is with regards to your solution posted on the thread below: https://community.splunk.com/t5/Splunk-Search/How-to-apply-the-predict-function-for-the-most-varying-field/m-p/422163 I have a relatively similar use case: I have multiple columns, where the first column is _time and the remaining columns are distinct fields holding numeric data for each timestamp. I need to compute forecast values using the predict command. I tried your approach of looping through the fields with foreach and passing each one to the predict command; however, it takes only one field and its values and computes the forecast for that field alone. I need to calculate the same for all the fields returned by the timechart command. It would be very helpful to get your input on this. Thank you, Taruchit
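One point that may help while waiting for a reply (a sketch only; hostA and hostB are hypothetical series names): predict accepts several fields in a single invocation, so if the series produced by the by clause are known, they can be listed explicitly:
index=my_index
| timechart span=1d sum(bytes) by host
| predict hostA AS hostA_forecast hostB AS hostB_forecast algorithm=LLP future_timespan=7
foreach only templates streaming eval-style expressions, so it cannot drive a transforming command like predict; with dynamic column names the field list has to be enumerated (or the search string generated) before predict runs.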
On click it is not expected to go to some other dashboard, but rather to load more panels under the current panels.
Hello, the requirement is to create a button, like a Submit button, with the name "Show Details". When I click the Show Details button, more panels need to load under the existing panels in the dashboard. Thanks
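A minimal Simple XML sketch of the usual pattern for this (the query and panel contents are placeholders): a link input acts as the button and sets a token on click, and the extra panels declare depends on that token so they only render afterwards.
<form>
  <fieldset submitButton="false">
    <!-- the link input acts as a "Show Details" button: clicking it sets the token -->
    <input type="link" token="details_link">
      <label></label>
      <choice value="show">Show Details</choice>
      <change>
        <set token="show_details">true</set>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html><p>existing panels go here</p></html>
    </panel>
  </row>
  <row>
    <!-- hidden until $show_details$ is set by the click above -->
    <panel depends="$show_details$">
      <table>
        <search>
          <query>index=my_index | stats count by host</query>
        </search>
      </table>
    </panel>
  </row>
</form>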
well, unpacked from tgz and pushed from deployer
tgz from splunkbase -> deployer -> SH cluster
Ugh. There is so much going on here that I don't know where to start.
1. Don't use wildcards at the beginning of your search terms. It kills your searches performance-wise.
2. You're running the same (very inefficient) base search twice. That's not the best idea.
3. You needlessly extract many fields but in the end only do stats count as Entity or stats count by title.
4. The first search gives you one number as a result; the appended search (which will most probably get silently finalized for exceeding the permitted subsearch time and will return _wrong_ results) returns several numbers, one for each title.
It seems you don't need any "merging" of two searches; you need to design your search from the ground up to get the results you need. But to do so you need to know (and tell us if you want us to help): 1) what your data looks like, and 2) what you want to achieve.
Thank you for the insight. The "| table * *" were the columns that all match, with variances between AD and unix. I have everything broken down per index in order to have a somewhat uniform and sanitary environment. I am having to retake a crash course in Splunk query right now. Let me try the method you prescribed and we can continue from there. I'll double-check my column headers between the three indexes. I will be more precise in my explanation in my next follow-up. Thank you.
..update..
index=cyber AND index=AD | table act, devtype, safe, issuer, username, purpose (for cyber) | table audit, e_user, evnt_cat, evnt_tsk, proc_name (for AD)
index=cyber AND index=unix | table act, devtype, safe, issuer, username, purpose (for cyber) | table proc, src, user, msg (for unix)
Double-checked my data. AD and unix searches are never done together; it's always cyber plus one or the other.
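One caveat worth flagging here (sketch only; the field names are copied from the tables above and may need adjusting): a given event lives in exactly one index, so index=cyber AND index=AD can never match anything. The two indexes have to be combined with OR, and the differing field names normalised, for example with coalesce:
(index=cyber) OR (index=AD)
| eval user=coalesce(username, e_user)
| stats values(act) as actions values(evnt_cat) as event_categories by user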
So I think I got what I needed: | stats sum("Size of data storage") by _time, "data storage name" Adding bin added a layer of unnecessary summing of the values; I had tried | bin span=12h _time. Also, I was not able to get the visual right with differentiated colors; I had to use the trellis option, which helped split my graph into 2 different graphs. For now I can make do with that, but in theory it should have split into different colors on the column chart, one for each data storage.
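For completeness, a sketch of the timechart equivalent (field names copied from the search above): splitting by the storage name in a single timechart yields one series, and therefore one colour, per data storage on the column chart, with no trellis needed.
| timechart span=12h sum("Size of data storage") by "data storage name"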
Hello, by default DMA summaries are not replicated between nodes in an indexer cluster (for warm and cold buckets). I wonder how the tstats command with summariesonly=true behaves when one node in the cluster fails. Imagine I have a 3-node, single-site IDX cluster with default settings. What happens when one node fails (so the summaries on that node are not available) and I run a search using "|tstats summariesonly=true..." on this cluster? If the search spans data from primary warm or cold buckets on the failed node, I will get incomplete data, right? (I think so, because the appropriate summaries are missing.) And if so, will I get any error message on the search page? And how does this change in a multi-site cluster? I assume that if one node fails I should still get complete data, because AFAIK in a multi-site cluster every site has a primary copy of each bucket along with its DMA summaries. Is that right or not? I need this info for a project I am working on. Thank you for your answers. Best regards, Lukas Mecir
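While waiting for an authoritative answer, one way to observe the effect empirically (a sketch; My_DataModel is a placeholder) is to compare the summary-only count against the full count per time bucket, since hours where the summary-only figure is lower point at missing summaries:
| tstats summariesonly=true count from datamodel=My_DataModel by _time span=1h
| eval mode="summary_only"
| append
    [| tstats summariesonly=false count from datamodel=My_DataModel by _time span=1h
    | eval mode="full"]
| chart sum(count) over _time by mode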
index=US_WHCRM_int   sourcetype="bmw-crm-wh-xl-cms-int-api" ("*Element*: bmw-cm-wh-xl-cms-contractWithCustomers-flow/processors/2/processors/0 @ bmw-crm-wh-xl-cms-int-api:bmw-crm-wh-xl-cms-api-impl/bmw-cm-wh-xl-cms-contractWithCustomers*") OR "*flow started put*contractWithCustomers" OR "*flow started put*customers:application*" OR "ERROR Message" OR "flow started put*contracts:application*" | rex field=message "(?<json_ext>\{[\w\W]*\})" | rex field=message "put:\\\\(?<Entity>[^:]+)" | rename attributes{}.value.details as details | rename properties.correlationId as correlationId | table _time properties.* message json_ext details Entity | spath input=json_ext | stats count as Entity | append     [ search index=US_WHCRM_int (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=INFO ("*Element*: bmw-cm-wh-xl-cms-contractWithCustomers-flow/processors/2/processors/0 @ bmw-crm-wh-xl-cms-int-api:bmw-crm-wh-xl-cms-api-impl/bmw-cm-wh-xl-cms-contractWithCustomers*") OR "*flow started put*contractWithCustomers" OR "*flow started put*customers:application*" OR "ERROR Message" OR "flow started put*contracts:application*") OR (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=ERROR "Error Message") | rex field=message "(?<json_ext>\{[\w\W]*\})" | rex field=message "put:\\\\(?<Entity>[^:]+)" | rename attributes{}.value.details as details | rename properties.correlationId as correlationId | table _time properties.* message json_ext details Entity | spath input=json_ext | stats count by title | fields count] | stats values(Entity) as Entity values(title) as title | eval success=title-Entity
I am using this query but I am not getting the correct count; please help me with this. Or is there any other option to find the difference between those two counts?
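As one possible direction (a heavily simplified sketch, not a drop-in replacement: the search terms are trimmed, and it assumes the severity field cleanly separates error events from fetch events): run the base search once and split the counting with conditional evals, which avoids append and its subsearch limits entirely.
index=US_WHCRM_int sourcetype="bmw-crm-wh-xl-cms-int-api" (severity=INFO OR severity=ERROR)
| eval kind=if(severity="ERROR", "error", "fetch")
| stats count(eval(kind="fetch")) as total_fetch count(eval(kind="error")) as error_count
| eval success=total_fetch-error_count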
OK. First question - how did you install the CIM add-on?
There might be a better way, but for example, like this (run-anywhere example):
| makeresults
| eval test="qa (qa_a_ai_bi1_integra.tio-n_d01)"
| table test
| rex mode=sed field=test "s/[^-A-Za-z0-9.]//g"
I would also advise externalising the config (the list of wanted containers) from the search itself. So I'd simply create a lookup (let's call it containers.csv) with just one column called "name" containing all the containers you expect, and then do
index=* Initialised xxxxxxxxxxxx xxxxxx
| rex "\{consumerName\=\'(MY REGEX)"
| chart count AS Connections by name
| append [| inputlookup containers.csv ]
| stats count by name
| where count < 2
This way, if your list of containers changes, it's easy to just update the lookup instead of rewriting the search.