All Posts


Hi, how do I create an automatic tag? I have the following:

eventtypes.conf
[duo_authentication]
search = sourcetype=json:duo type=authentication

tags.conf
[eventtype=duo_authentication]
authentication = enabled

I also added "default: index=index_of_duo" to the admin, user, and power roles. But the tag simply doesn't get added (I don't understand why, since the eventtype search above works).
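A quick way to check whether the tag is actually applied at search time is to search on the eventtype and group by the tag field. A minimal sketch, assuming the index and eventtype names above, and assuming both .conf files live in an app whose permissions make them visible to the searching roles:

index=index_of_duo eventtype=duo_authentication
| head 100
| fillnull value="(no tag)" tag
| stats count by eventtype, tag

If every row shows "(no tag)" even though the eventtype matches, the tags.conf stanza is most likely not being picked up on the search head or is not shared with the role running the search.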
I don't have that option. Would that happen to be in the advanced edit? 
Checked each SH individually; they all behave exactly the same. I find the app in the management page, I click the "launch app" link and I end up here: <splunkURI>:8000/en-GB/app/Splunk_SA_CIM/ta_nix_configuration
...
The CIM setup page now works just fine; it is the "launch app" link, or using the URI for the Splunk_SA_CIM app directly, which lands me on the Splunk_TA_Linux configuration page. I have not noticed this behaviour for any other app so far; everything else seems to be working just fine. It is possible, of course, that the problem exists outside of Splunk. At least I know this is not the expected behaviour, so now I just have to figure out why I cannot access the app without landing on the wrong page.
Hi @BTB, if you select Once (not "For each result"), you get only one file with all the results. Ciao. Giuseppe
My alerts only allow me to choose trigger once for each result.     
OK. I was afraid that you had tried to install the app using the GUI and it had landed on one SH only. If you pushed it properly from the deployer, it should be OK. No, it does not look like a normal situation (although I vaguely recall someone having a similar problem). Are you sure your LB (you must have some HTTP reverse proxy in place) isn't doing something strange with your requests? Are you able to launch the CIM setup properly if you connect directly to one SH?
Hi @Satyapv, as I said, see the Link Switches example in the Splunk Dashboard Examples App; it is exactly what you want, with the only exception that the button cannot be coloured. Ciao. Giuseppe
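For reference, a rough Simple XML sketch of that link-switch pattern. The show_details token, panel titles, and searches are made up for illustration; panels marked depends="$show_details$" stay hidden until the link input sets the token:

<form>
  <label>Show Details example</label>
  <fieldset submitButton="false">
    <!-- clicking the link sets the token that the hidden panel depends on -->
    <input type="link" token="show_details" searchWhenChanged="true">
      <choice value="show">Show Details</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Always visible</title>
      <table>
        <search>
          <query>index=_internal | stats count by sourcetype</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
  <row>
    <!-- hidden until the Show Details link is clicked -->
    <panel depends="$show_details$">
      <title>Details</title>
      <table>
        <search>
          <query>index=_internal | stats count by component</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

The searches and the "Details" panel content are placeholders; the two pieces that matter are the link input and the depends attribute on the panels that should appear later.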
Hi @BTB, the difference between an alert and a report is that if you don't have any events, you don't receive an alert, but you do receive an empty report. As an attachment to the alert you can have a csv or pdf file with all the search results, so why can't you use an alert? You get one alert for each result only if you configure the alert to trigger for each result, but you can also configure a single alert with all the results in one file. Ciao. Giuseppe
I don't want to send an alert because I want the benefits of a report (all results in one file as opposed to sending an alert for each hit on the search), so I'm trying to figure out how to send a report but only if it has results. If it has zero results, I don't want it to send. 
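One common way to get that behaviour, sketched below as a rough savedsearches.conf example (the stanza name, search, schedule, and recipient are placeholders), is a scheduled alert that triggers only when the result count is greater than zero and attaches all results as a single csv, so nothing is sent when there are no results and you still get one file per run:

[my_conditional_report]
search = index=main sourcetype=my_sourcetype error
cron_schedule = 0 6 * * *
enableSched = 1
# trigger only when the search returns at least one result
counttype = number of events
relation = greater than
quantity = 0
# one email per trigger, with all results attached as a csv
action.email = 1
action.email.to = someone@example.com
action.email.sendcsv = 1

The same settings map onto the UI options mentioned above: trigger condition "Number of Results is greater than 0", trigger "Once", and the csv attachment enabled on the email action.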
I am not using a base search but two different queries to get the exact counts I want. The first query is this:

index=US_WHCRM_int sourcetype="bmw-crm-wh-xl-cms-int-api" ("*Element*: bmw-cm-wh-xl-cms-contractWithCustomers-flow/processors/2/processors/0 @ bmw-crm-wh-xl-cms-int-api:bmw-crm-wh-xl-cms-api-impl/bmw-cm-wh-xl-cms-contractWithCustomers*") OR "*flow started put*contractWithCustomers" OR "*flow started put*customers:application*" OR "ERROR Message" OR "flow started put*contracts:application*" | rex field=message "(?<json_ext>\{[\w\W]*\})" | rex field=message "put:\\\\(?<Entity>[^:]+)" | rename attributes{}.value.details as details | rename properties.correlationId as correlationId | table _time properties.* message json_ext details Entity | spath input=json_ext | stats count as Entity

which gives me Entity as the total fetch count. The other query is this:

index=US_WHCRM_int (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=INFO ("*Element*: bmw-cm-wh-xl-cms-contractWithCustomers-flow/processors/2/processors/0 @ bmw-crm-wh-xl-cms-int-api:bmw-crm-wh-xl-cms-api-impl/bmw-cm-wh-xl-cms-contractWithCustomers*") OR "*flow started put*contractWithCustomers" OR "*flow started put*customers:application*" OR "ERROR Message" OR "flow started put*contracts:application*") OR (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=ERROR "Error Message") | rex field=message "(?<json_ext>\{[\w\W]*\})" | rex field=message "put:\\\\(?<Entity>[^:]+)" | rename attributes{}.value.details as details | rename properties.correlationId as correlationId | table _time properties.* message json_ext details Entity | spath input=json_ext | stats count by title | fields count

which gives me the title count as the error count. Now I want a success count, which can be calculated by subtracting the error count from the total fetch count. How can I get that? Please help me with this. Hope this helps you to understand.
Hi, is there any way to find a transaction flow like this? I have a log file containing 50 million transactions like this:

16:30:53:002 moduleA:[C1]L[143]F[10]ID[123456]
16:30:54:002 moduleA:[C2]L[143]F[20]ID[123456]
16:30:55:002 moduleB:[C5]L[143]F[02]ID[123456]
16:30:56:002 moduleC:[C12]L[143]F[30]ID[123456]
16:30:57:002 moduleD:[C5]L[143]F[7]ID[123456]
16:30:58:002 moduleE:[C1]L[143]F[10]ID[123456]
16:30:59:002 moduleF:[C1]L[143]F[11]ID[123456]
16:30:60:002 moduleZ:[C1]L[143]F[11]ID[123456]

I need to find the module flow for each transaction and then find rare flows.

Challenges:
1. There is no specific "key value" that exists on all lines belonging to a transaction.
2. The only key value that exists on every line is ID[123456].
3. An ID might be duplicated and might belong to several transactions.
4. Module names have no specific pattern and there are lots of module names.
5. The ID is not in a fixed position (end of each line).

Any idea? Thanks
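A rough SPL sketch of one way to start, assuming the events are already indexed and that the ID plus chronological order is enough to reconstruct a flow (the index and sourcetype names are placeholders):

index=my_index sourcetype=my_sourcetype
| rex "ID\[(?<id>\d+)\]"
| rex "\s(?<module>[^\s:]+):\["
| sort 0 _time
| stats list(module) as flow by id
| eval flow=mvjoin(flow, " -> ")
| rare flow limit=20

Two caveats: stats list() keeps at most 100 values per group and sort 0 over 50 million events is expensive, so you would likely narrow the time range first; and if the same ID can be reused by several transactions, transaction id maxpause=<gap> (or adding a time bucket to the by clause) is one way to split them apart.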
Hello @kamlesh_vaghela, this is with regard to your solution posted on the thread below: https://community.splunk.com/t5/Splunk-Search/How-to-apply-the-predict-function-for-the-most-varying-field/m-p/422163 I have a relatively similar use case: I have multiple columns, where the first column is _time and the remaining columns are distinct fields with numeric data for each timestamp. I need to compute forecast values using the predict command. I tried your approach of looping through fields with foreach and then passing them to the predict command; however, it takes only one field and its values and computes the forecast for that. I need to calculate the same for all the fields returned by the timechart command, so it would be very helpful to get your inputs on this. Thank you, Taruchit
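For what it's worth, predict itself accepts a list of fields, so when the column names coming out of timechart are known you can pass them all in one call rather than looping; a rough sketch with made-up index, span, and series names:

index=my_index
| timechart span=1h count by host
| predict hostA hostB hostC future_timespan=24

The limitation is that the field names must be written out explicitly; foreach can rewrite values field by field, but it cannot feed a whole column into a command like predict, which is presumably why the looping approach only forecasts one field.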
On click it is not expected to go to some other dashboard, but rather to load more panels under the current panels.
Hello, the requirement is to create a button, like a Submit button, named “Show Details”. When I click the Show Details button, more panels need to load under the existing panels in the dashboard. Thanks
well, unpacked from tgz and pushed from deployer
tgz from splunkbase -> deployer -> SH cluster
Ugh. There is so much going on here that I don't know where to start.

1. Don't use wildcards at the beginning of your search terms. It kills your searches performance-wise.
2. You're doing the same (very inefficient) base search twice. That's not the best idea.
3. You needlessly extract many fields but in the end only do stats count as Entity or stats count by title.
4. The first search gives you one number as a result; the appended search (which will most probably get silently finalized due to exceeding the permitted subsearch time and will return _wrong_ results) returns several numbers - one for each title.

It seems you don't need any "merging" of two searches; you need to design your search from the ground up to get the results you need. But to do so you need to know (and tell us if you want us to help):

1) What does your data look like?
2) What do you want to achieve?
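As an illustration only of the usual pattern (the base filter below is deliberately simplified and the severity test is a guess taken from the second search above), "success = total - errors" is normally computed in a single pass with a conditional count rather than by merging two searches:

index=US_WHCRM_int sourcetype="bmw-crm-wh-xl-cms-int-api" ("flow started put*" OR "ERROR Message")
| eval is_error=if(severity=="ERROR", 1, 0)
| stats count as total_count, sum(is_error) as error_count
| eval success_count=total_count - error_count

Whether that base filter is right depends on what the events actually look like, which is exactly the information asked for above.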
Thank you for the insight. The "| table * *" entries were the columns that all match, with variances between AD and unix. I have everything broken down specifically per index in order to have a somewhat uniform and sanitary environment. I am having to retake a crash course in Splunk queries right now. Let me try the method you prescribed and we can continue from there. I'll double-check my column headers between the three indexes. I will be more precise in my explanation in my next follow-up. Thank you.

..update..

index=cyber AND index=AD
| table act, devtype, safe, issuer, username, purpose (for cyber)
| table audit, e_user, evnt_cat, evnt_tsk, proc_name, (for AD)

index=cyber AND index=unix
| table act, devtype, safe, issuer, username, purpose (for cyber)
| table proc, src, user, msg (for unix)

Double-checked my data. AD and unix searches are never done together; it is always cyber plus one or the other.
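One note on the update: index=cyber AND index=AD can never return anything, because each event lives in exactly one index, and a second | table simply replaces the columns chosen by the first one. The usual pattern is an OR across the two indexes with the differing field names normalised onto common ones; a rough sketch, where the coalesce pairings are only guesses at which columns correspond:

(index=cyber) OR (index=AD)
| eval user=coalesce(username, e_user)
| eval action=coalesce(act, evnt_tsk)
| table index, _time, user, action, devtype, safe, issuer, purpose, audit, evnt_cat, proc_name

The same shape works for the cyber/unix pair once the matching columns between those indexes are confirmed.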