All Posts

That's hard to tell - please can you share your full search?
Hello,    I have a question on a spl request.  I have those extracted fields about the entry data.   I used this spl request :  | union maxtime=500 timeout=500 [ search index="idx_arv_ach_appels_traites" AND (activite_cuid="N1 FULL" OR activite="FULL AC") AND (NOT activite_cuid!="N1 FULL" AND NOT activite!="FULL AC")| stats sum(appels_traites) as "Nbre appels" by date_month] [ search index="idx_arv_ach_tracage" (equipe_travail_libelle="*MAROC N1 AC FULL")             | eval date=strftime(_time,"%Y-%m-%d")             | dedup date,code_alliance_conseiller_responsable,num_client             | chart count(theme_libelle) as "Nbre_de_tracagesD" by date_month ] [search index="idx_arv_ach_cas_traces"  source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*") AND (code_resolution="474"OR"836"OR"2836"OR"2893"OR"3085"OR"3137"OR"3244"OR"4340"OR"4365"OR"4784"OR"5893"OR"5896"OR"5897"OR"5901"OR"5909"OR"5914"OR"6744"OR"7150"OR"8020"OR"8531"OR"8534"OR"8535"OR"8548"OR"8549"OR"8709"OR"8876"OR"8917"OR"8919"OR"8946"OR"8961"OR"8962"OR"8970"OR"8974"OR"8998"OR"8999"OR"9000"OR"9001"OR"9004"OR"9006"OR"9007"OR"9010"OR"9011"OR"9012"OR"9048"OR"9052"OR"9058"OR"9059"OR"9069"OR"9088"OR"9089"OR"9090"OR"9095"OR"9107"OR"9108"OR"9116"OR"9148"OR"9150"OR"9169"OR"9184"OR"9190"OR"9207"OR"9208"OR"9209"OR"9211"OR"9214"OR"9223"OR"9239"OR"9240"OR"9241"OR"9248"OR"9251"OR"9274"OR"92752"OR"9276"OR"9288"OR"9299"OR"9300"OR"9302"OR"9323"OR"9324"OR"9366"OR"9382"OR"9385"OR"9447"OR"9450"OR"9455"OR"9466"OR"9467"OR"9476"OR"9516"OR"9559"OR"9584"OR"9603"OR"9627"OR"9633"OR"9640"OR"9654"OR"9670"OR"9710"OR"9735"OR"9740"OR"9782"OR"9784"OR"9785"OR"9786"OR"9794"OR"9817"OR"9839"OR"9919"OR"9932"OR"10000"OR"10010"OR"10017"OR"10022"OR"10048"OR"10049"OR"10053"OR"10081"OR"10099"OR"10100"OR"10103"OR"10104"OR"10105"OR"10116"OR"10118"OR"10142"OR"10143"OR"10153"OR"10160"OR"10162"OR"10165"OR"10185"OR"10189"OR"10190"OR"10191"OR"10199"OR"10206"OR"10209"OR"10216"OR"10229"OR"10233"OR"10241"OR"10256"OR"10278"OR"10280"OR"10288"OR"10289"OR"10290"OR"10299"OR"10330"OR"10331"OR"10367"OR"10432"OR"10474"OR"10496"OR"10499"OR"10506"OR"10524"OR"10525"OR"10526"OR"10527"OR"10528"OR"10530"OR"10531"OR"10532"OR"10534"OR"10535"OR"10536"OR"10537"OR"10538"OR"10540"OR"10541"OR"10543"OR"10557"OR"10558"OR"10560"OR"10561"OR"10579"OR"10592"OR"10675"OR"10676"OR"10677"OR"10678"OR"10680"OR"10681"OR"10704"OR"10748"OR"10759"OR"10760"OR"10764"OR"10766"OR"10768"OR"10769"OR"10770"OR"10771"OR"10783"OR"10798"OR"10799"OR"10832"OR"10857"OR"10862"OR"10875"OR"10928"OR"10929"OR"10933"OR"10934"OR"10941"OR"10947"OR"10962"OR"10966"OR"10969"OR"10977"OR"10978"OR"11017"OR"11085"OR"11114"OR"11115"OR"11116"OR"11138"OR"11139"OR"11140"OR"11141"OR"11142"OR"11143"OR"11144"OR"11219"OR"11252"OR"11239"OR"11268"OR"11326"OR"11327"OR"11328"OR"11329"OR"11410"OR"11514"OR"11552"OR"11992"OR"12012"OR"12032"OR"12033"OR"12034"OR"12035"OR"12036"OR"12037"OR"12038"OR"12039"OR"12040"OR"12041"OR"12152") | chart sum(total) as "Nbre_de_tracagesB" by date_month ] [search index="idx_arv_ach_cas_traces"  source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND 
cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*")| chart sum(total) as "Nbre_de_tracages_total" by date_month] [search index="idx_arv_ach_enquetes_satisfaction"  source="*maroc_base_1*" (conseiller_createur_equipe="*MAROC N1 AC FULL") | chart count(appreciation) as "Nbre_de_retour_enquete" by date_month] [search index="idx_arv_ach_enquetes_satisfaction"  source="*maroc_base_1*" (conseiller_createur_equipe="*MAROC N1 AC FULL") | eval nb5=case(appreciation="5", 5) | eval nb123=case(appreciation>="1" and appreciation<="3", 3) | eval nb1234=case(appreciation>="1" and appreciation<="4", 4)             | eval nbtotal=case(appreciation>="1" and appreciation<="5", 5)| stats count(nb5) as "Nbre_de_5", count(nb123) as "Nbre_de_123", count(nb1234) as "Nbre_de_1234", count(nbtotal) as "Nbre_total" by  date_month             | eval pourcentage=round((Nbre_de_5/Nbre_total-(Nbre_de_123/Nbre_total))*100,2)." %"             | rename pourcentage as deltaSAT | table deltaSAT date_month] |  stats values("Nbre appels") as "Nbre appels" values("Nbre_de_retour_enquete") as "Nbre_de_retour_enquete" values(Nbre_de_tracagesD) as Nbre_de_tracagesD values("Nbre_de_tracagesB") as "Nbre_de_tracagesB" values("Nbre_de_tracages_total") as "Nbre_de_tracages_total" values(deltaSAT) as deltaSAT by date_month             | eval pourcentage=round((Nbre_de_tracagesB/Nbre_de_tracages_total)*100, 2)." %"             | rename pourcentage as "Tx traçages bloquants"             | eval TxTracage=round((Nbre_de_tracagesD/'Nbre appels')*100,2)." %"             | rename TxTracage as "Tx traçages dédoublonnés"             | rename Nbre_de_tracagesD as "Nbre traçages dédoublonnés"             |eval date_month=case(date_month=="january", "01-Janvier", date_month=="february", "02-Février", date_month=="march", "03-Mars", date_month=="april", "04-Avril", date_month=="may", "05-Mai", date_month=="june", "06-Juin", date_month=="july", "07-Juillet", date_month=="august", "08-Août", date_month=="september", "09-Septembre", date_month=="october", "10-Octobre", date_month=="november", "11-Novembre", date_month=="december", "12-Décembre") | sort date_month | eval date_month=substr(date_month, 4)             | fields date_month, "Tx traçages bloquants", "Nbre appels", "Nbre traçages dédoublonnés", "Tx traçages dédoublonnés"             | transpose 15 header_field=date_month   I obtain this result but I have a problem :  I haven't worked on the date_year field and I don't get a table in a chronoligcal order.    Can you help me please ?      
Thank you, that was really helpful, but I still have one last small issue: what the SPL does is, if values are missing, it adds "true" to the last flags, not the correct ones. Example:
-config Conf -console -ntpsync no --check_core no
should give:
Flag: config  Value: Conf
Flag: console  Value: true
Flag: ntpsync  Value: no
Flag: check_core  Value: no
but it extracts all the values first and then adds "true" to the last flags, so it gives this, which is incorrect:
Flag: config  Value: Conf
Flag: console  Value: no
Flag: ntpsync  Value: no
Flag: check_core  Value: true
So if you can show me a way to move the flags without values to the end, so they get matched with the "true" values, that would be really helpful.
If SNOW will interpret a newline character as dividing the field into two lines then this may work. | rex field=summary mode=sed "s/from '(.*)/from\n'\1/"
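If it helps, here is a quick way to try that out on a throwaway event before putting it in the alert - the summary value below is a shortened, made-up stand-in for the real one from the question:
| makeresults
| eval summary="Failed backup of the transaction log for SQL Server database 'model' from 'WSQL040Q.example.net'."
| rex field=summary mode=sed "s/from '(.*)/from\n'\1/"
| table summary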
That returns client and client_ip just fine. Seems like Splunk cares which index is mentioned first?
It's best to put sample data and configs in code blocks so the formatting is retained. You say you tried an inline regex in props.   Would you please share that?  The transform shared looks like it should have worked.  Please confirm the sourcetype in inputs.conf matches that in props.conf. You have a source that works - how does that working config differ from the non-working one?  Please share the complete stanzas so we can spot any differences.
Hello, Is there any way I can put the data from all CloudWatch log groups under certain paths? I see that we shouldn't use wildcards. I have many log groups under a certain path, and in the future more log groups will be added, so it would be an operational overhead for the team to keep adding them whenever a new log group is created. E.g. /aws/lambda/*, /aws/*. Can you guide me here? I am using the Splunk Add-on for AWS with the pull-based mechanism.
Hi All, I have a field called summary in my search - Failed backup of the transaction log for SQL Server database 'model' from 'WSQL040Q.tkmaxx.tjxcorp.net\\MSSQLSERVER'. I am creating this search for a ServiceNow alert and I am sending this summary field value under comments in ServiceNow. I need it to break into two lines like this:
Line 1 - Failed backup of the transaction log for SQL Server database 'model' from
Line 2 - 'WSQL040Q.tkmaxx.tjxcorp.net\\MSSQLSERVER'.
How do I implement this in my search? Thanks in advance!
Hi! How did you get the HTTP status code on BRUM? Regards,
Hi @scelikok, I missed mentioning that the private key can be in one of the formats below:
Format 1: PrivateKey : abc.def.ghi.jkl
Format 2: PrivateKey :
Meaning it can be empty as well as in format 1.
Also, you have to restart the Splunk instance it's on in order for the change to take effect.
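For reference, a restart is usually just this from the command line (assuming a default install location):
$SPLUNK_HOME/bin/splunk restart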
I am trying to extract some field values that come in the following format:
<date>00-mon-year</date>
<DisplayName>example</DisplayName>
<Hostname>example</Hostname>
When using the SPL rex command I am able to extract the fields using rex field=_raw "Hostname>(?<hostname>.*)<". Then I have tried to use an inline regex in props with the same regex (without quotes) and it does not work. I have also used a transforms.conf with the stanza as follows:
[hostname]
FORMAT = Hostname::$1
REGEX = Hostname>(?<hostname>(.*?))</Hostname
then in props:
REPORT -Hostname = hostname
and this does not work.
However, I have another source that pulls the same type of logs and I am able to use an inline regex in SPL format just fine with no issues. This issue is only specific to this source, which I have as [source::/opt/*]. Any ideas on a fix?
Yes.  IF you have a search head cluster (shc), AND you are trying to edit the config on one of the members instead of on the deployer, THEN that's exactly the message I expect you to get.  It *might* be possible to get that if you simply don't have some permission or another that's required, but I think those messages are different ones.
So - Do you have a search head cluster? If you don't know, then ask your Splunk folks and/or have them manage this config for you. If you are the Splunk person and don't know what I'm saying (and you built it) then you don't have a SHC and we'll have to look into other things.
(Also, please be careful as to *which* "reply" button you click, so we can keep the threads going correctly instead of being willy-nilly all over the place!)
Splunk CrowdStrike Dashboard
Also, I think Splunk is using event.DetectID along with search action = allowed and event.DetectID along with search action = blocked but I don't know where these fields connect to on the CrowdStrike side. Here's an example of what I saw on the Splunk side:
index=security "metadata.eventType"=DetectionSummaryEvent metadata.customerIDString=* | search action=allowed | stats dc(event.DetectId) as Detections
| lookup index=security sourcetype=CrowdStrike:Event:Streams:JSON "metadata.eventType"=DetectionSummaryEvent metadata.customerIDString=* | search action=blocked | stats dc(event.DetectId) as Detections
I can't imagine anything other than that the regex doesn't match - all else looks fine. AND - the data you provided I think was munged by the editor! Can you repaste that sample event only be SURE to use the </> code button?
| eval Filename=mvindex(split(INTERVAL_FILE,"\\"),-1)
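To see what that does in isolation, you can try it against a made-up value first - the path below is just an example, the real INTERVAL_FILE would come from your events:
| makeresults
| eval INTERVAL_FILE="C:\\data\\exports\\interval_file.csv"
| eval Filename=mvindex(split(INTERVAL_FILE,"\\"),-1)
| table INTERVAL_FILE Filename
split() on the backslash turns the path into a multivalue field, and mvindex(..., -1) keeps just the last segment, i.e. the file name.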
Oh lovely, the "once per day" does wonders for simplifying the problem's edges.  So there's a few different ways to handle this then.  Let's go through some options.
I think our base search will be something like
(index="a" sourcetype="x" "Generating Event gz File for*") OR (index="b" sourcetype="y" "File Processed*")
I'm giving you the search piece by piece, with the idea you'll paste each piece in, see what the results are (perhaps with something like `| table *` after it), so you understand what it's doing before you add the next piece.  (Note some are "add this to the end" and others are "replace the last one with this one", so just be aware)
Anyway, that's what many of us call a 'data salad'.  Splunk handles messy stuff just fine.  Toss it all in the salad, then later we'll add croutons and dressing.  That should give you all the data - both sides of it.
Now, from here you could do something as simple as counting the results.  Add this to the end.
| stats count
If all is well you will have an answer of 2.  If the process is broken you may get 1, and if it's not run yet today you'll get 0.  This could be used as is, but I feel it's rather plain and the alert will be sort of dumb and uninteresting and without context.
The dumb way to make it interesting at the end is to eval the count so it says words.  Add this to the end:
| eval status = case(count==2, "Everything processed correctly.", count==1, "Danger Will Robinson, it didn't process right!", true(), "I don't know what's going on, nothing came in today at all!")
Now when you run it, you'll get some words that would possibly be useful in the alert! But this is still just kind of "not using the information we have available".
So, replacing the entire | stats ... through the end with this new stats + stuff (e.g. after the base search at the top):
| eval generated = if(searchmatch("Generating Event gz File for"), 1, 0)
| eval processed = if(searchmatch("File Processed"), 1, 0)
| stats sum(generated) AS generated, sum(processed) AS processed BY filename
| eval status = case(generated == 1 AND processed == 1, "Received and Processed " . filename, generated == 1 AND processed == 0, "NOT PROCESSED " . filename, true(), "Nothing reported at all")
What that does is, before we stats we create some fields (generated and processed) with a 0 or 1 in them (e.g. false or true).  We group those by filename (just in case!) with the stats, then create a "status" field that's got some information plus the filename.
It should work?  I mean, I don't have your data but at least it generates no errors.  Feel free to break it down - start by adding the two evals to see that THEY work right, then add the stats to see if it counts right, etc...
Let me know what else this might need to do!  We could include a time so that you could run historical reports... there's all sorts of other things you could do with it.
@ITWhisperer seems it's working partially. I can see only data with cl1 is getting replaced. I also have data with cl3 which needs to be replaced by ACD85. cl1=ACD55 cl3=ACD85. Am I missing anything here?
"searchable" and "archived" are Splunk Cloud terms, but this question is in the Splunk Enterprise forum.  Please confirm which is in use. In Splunk Cloud, one sets the Searchable Days value in the U... See more...
"searchable" and "archived" are Splunk Cloud terms, but this question is in the Splunk Enterprise forum.  Please confirm which is in use. In Splunk Cloud, one sets the Searchable Days value in the UI or via ACS.  For one year of searching, set the value to 365.  Make sure the maximum size of the index is sufficient to hold the expected volume of data for that time.  Set the archive period by enabling DDAA and entering 730 as the archive time (365 days as searchable plus 365 days archived). In Splunk Enterprise, data is searchable until it is frozen.  There is no archive status unless you implement a coldToFrozenScript or coldToFrozenDir to move the data to a separate location for safe-keeping. These settings are in indexes.conf. For more information, see: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/Admin/ManageIndexes https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Indexesconf https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Setting_data_retention_rules_in_Splunk_Cloud_Platform  
Hi, yes, that's exactly what I did and that fixed the issue in my case :). Thanks ! Ema