
All Posts

Hi @kevhead, sorry, that was a typo: in that installation I had a lookup containing some additional information that you can delete from the dashboard. Ciao. Giuseppe
For me, it is going to be an ongoing thing, not a one-time effort, so I am wondering if there is a way to achieve this.
Nice, I tried this and it looks like it is working. Question: does this mean only part of my log file will be ingested, so the whole log's disk space does not count against my license? I actually only want to ingest part of my debug logs (which are huge). Also, can we line-break the events after this conversion so we have separate events again after ingestion? @darrenfuller @woodcock
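A minimal sketch of the kind of index-time rewrite this thread appears to describe, assuming the suggestion being tried was a SEDCMD in props.conf (the sourcetype name and the pattern below are placeholders, not taken from the thread):

[my_debug_sourcetype]
# Whatever a SEDCMD strips from _raw at index time never reaches the index,
# so it should not count against license volume.
SEDCMD-strip_debug_detail = s/DEBUG-DETAIL:.*$//g

On the second question: line breaking happens earlier in the ingestion pipeline than SEDCMD-style rewrites, so events generally cannot be re-split into new events after the conversion within the same pass.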
@gcusello Thank you for providing this code for the dashboard. I've implemented it and it's working quite well, except for the hardware portion, which returns "Error in 'lookup' command: Could not construct lookup 'Server, host, OUTPUT, IP, Tipologia'. See search.log for more details". Any assistance with this would be great, thank you!
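For context on that message: "Could not construct lookup" usually means the arguments passed to the lookup command do not resolve to a lookup definition that is visible to the app, either because the definition does not exist or is not shared, or because the arguments could not be parsed. Using the names quoted in the error, and assuming a lookup definition called Server exists, the usual invocation shape would be:

| lookup Server host OUTPUT IP Tipologia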
Hello to everyone! I have a curious situation: I have log files that I am collecting via a Splunk UF. These log files do not contain a whole timestamp: one part of the timestamp is contained in the file name, and the other is placed directly in the event. As I found in other answers, I have options:
1. INGEST_EVAL on the indexer layer: I did not understand how I could take one part from the source and glue it onto the _raw data (link to the answer).
2. Use a handmade script to create a valid timestamp for the events: this is more understandable to me, but it looks like "reinventing the wheel".
So the question is: may I use the first option, if it is possible? This is an example of the source:
E:\logs\rmngr_*\24020514.log (* is some number; 24 = year, 02 = month, 04 = day, 14 = hour)
And this is an example of the event:
45:50.152011-0,CONN,3,process=rmngr,p:processName=RegMngrCntxt,p:processName=ServerJobExecutorContext,OSThread=15348,t:clientID=64658,t:applicationName=ManagerProcess,t:computerName=hostname01,Txt=Clnt: DstUserName1: user@domain.com StartProtocol: 0 Success
where 45:50.152011 is the minute, second, and subsecond.
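A minimal sketch of what option 1 could look like, assuming the events arrive under a dedicated sourcetype; the stanza names below are placeholders, and the regex escaping may need adjustment in .conf files. The idea is to have INGEST_EVAL rebuild _time from the YYMMDDHH portion of the filename plus the MM:SS.subsecond portion of the event:

props.conf (on the indexer or heavy forwarder):
[rmngr_log]
TRANSFORMS-set_time = set_time_from_source

transforms.conf:
[set_time_from_source]
# Glue "24020514" from the source path to "45:50.152011" from the event,
# then parse the combined string as year/month/day/hour minute:second.microsecond.
INGEST_EVAL = _time=strptime(replace(source, ".*(\d{8})\.log$", "\1") . " " . replace(_raw, "^(\d{2}:\d{2}\.\d{6}).*$", "\1"), "%y%m%d%H %M:%S.%6N")

Since INGEST_EVAL runs at index time on the parsing tier, this keeps everything inside Splunk and avoids the handmade script.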
Works great! Thank you.
That's hard to tell - please can you share your full search?
Hello, I have a question about an SPL request. I have these extracted fields for the entry data. I used this SPL request:

| union maxtime=500 timeout=500
    [ search index="idx_arv_ach_appels_traites" AND (activite_cuid="N1 FULL" OR activite="FULL AC") AND (NOT activite_cuid!="N1 FULL" AND NOT activite!="FULL AC")
    | stats sum(appels_traites) as "Nbre appels" by date_month]
    [ search index="idx_arv_ach_tracage" (equipe_travail_libelle="*MAROC N1 AC FULL")
    | eval date=strftime(_time,"%Y-%m-%d")
    | dedup date,code_alliance_conseiller_responsable,num_client
    | chart count(theme_libelle) as "Nbre_de_tracagesD" by date_month ]
    [ search index="idx_arv_ach_cas_traces" source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*") AND (code_resolution="474"OR"836"OR"2836"OR"2893"OR"3085"OR"3137"OR"3244"OR"4340"OR"4365"OR"4784"OR"5893"OR"5896"OR"5897"OR"5901"OR"5909"OR"5914"OR"6744"OR"7150"OR"8020"OR"8531"OR"8534"OR"8535"OR"8548"OR"8549"OR"8709"OR"8876"OR"8917"OR"8919"OR"8946"OR"8961"OR"8962"OR"8970"OR"8974"OR"8998"OR"8999"OR"9000"OR"9001"OR"9004"OR"9006"OR"9007"OR"9010"OR"9011"OR"9012"OR"9048"OR"9052"OR"9058"OR"9059"OR"9069"OR"9088"OR"9089"OR"9090"OR"9095"OR"9107"OR"9108"OR"9116"OR"9148"OR"9150"OR"9169"OR"9184"OR"9190"OR"9207"OR"9208"OR"9209"OR"9211"OR"9214"OR"9223"OR"9239"OR"9240"OR"9241"OR"9248"OR"9251"OR"9274"OR"92752"OR"9276"OR"9288"OR"9299"OR"9300"OR"9302"OR"9323"OR"9324"OR"9366"OR"9382"OR"9385"OR"9447"OR"9450"OR"9455"OR"9466"OR"9467"OR"9476"OR"9516"OR"9559"OR"9584"OR"9603"OR"9627"OR"9633"OR"9640"OR"9654"OR"9670"OR"9710"OR"9735"OR"9740"OR"9782"OR"9784"OR"9785"OR"9786"OR"9794"OR"9817"OR"9839"OR"9919"OR"9932"OR"10000"OR"10010"OR"10017"OR"10022"OR"10048"OR"10049"OR"10053"OR"10081"OR"10099"OR"10100"OR"10103"OR"10104"OR"10105"OR"10116"OR"10118"OR"10142"OR"10143"OR"10153"OR"10160"OR"10162"OR"10165"OR"10185"OR"10189"OR"10190"OR"10191"OR"10199"OR"10206"OR"10209"OR"10216"OR"10229"OR"10233"OR"10241"OR"10256"OR"10278"OR"10280"OR"10288"OR"10289"OR"10290"OR"10299"OR"10330"OR"10331"OR"10367"OR"10432"OR"10474"OR"10496"OR"10499"OR"10506"OR"10524"OR"10525"OR"10526"OR"10527"OR"10528"OR"10530"OR"10531"OR"10532"OR"10534"OR"10535"OR"10536"OR"10537"OR"10538"OR"10540"OR"10541"OR"10543"OR"10557"OR"10558"OR"10560"OR"10561"OR"10579"OR"10592"OR"10675"OR"10676"OR"10677"OR"10678"OR"10680"OR"10681"OR"10704"OR"10748"OR"10759"OR"10760"OR"10764"OR"10766"OR"10768"OR"10769"OR"10770"OR"10771"OR"10783"OR"10798"OR"10799"OR"10832"OR"10857"OR"10862"OR"10875"OR"10928"OR"10929"OR"10933"OR"10934"OR"10941"OR"10947"OR"10962"OR"10966"OR"10969"OR"10977"OR"10978"OR"11017"OR"11085"OR"11114"OR"11115"OR"11116"OR"11138"OR"11139"OR"11140"OR"11141"OR"11142"OR"11143"OR"11144"OR"11219"OR"11252"OR"11239"OR"11268"OR"11326"OR"11327"OR"11328"OR"11329"OR"11410"OR"11514"OR"11552"OR"11992"OR"12012"OR"12032"OR"12033"OR"12034"OR"12035"OR"12036"OR"12037"OR"12038"OR"12039"OR"12040"OR"12041"OR"12152")
    | chart sum(total) as "Nbre_de_tracagesB" by date_month ]
    [ search index="idx_arv_ach_cas_traces" source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*")
    | chart sum(total) as "Nbre_de_tracages_total" by date_month]
    [ search index="idx_arv_ach_enquetes_satisfaction" source="*maroc_base_1*" (conseiller_createur_equipe="*MAROC N1 AC FULL")
    | chart count(appreciation) as "Nbre_de_retour_enquete" by date_month]
    [ search index="idx_arv_ach_enquetes_satisfaction" source="*maroc_base_1*" (conseiller_createur_equipe="*MAROC N1 AC FULL")
    | eval nb5=case(appreciation="5", 5)
    | eval nb123=case(appreciation>="1" and appreciation<="3", 3)
    | eval nb1234=case(appreciation>="1" and appreciation<="4", 4)
    | eval nbtotal=case(appreciation>="1" and appreciation<="5", 5)
    | stats count(nb5) as "Nbre_de_5", count(nb123) as "Nbre_de_123", count(nb1234) as "Nbre_de_1234", count(nbtotal) as "Nbre_total" by date_month
    | eval pourcentage=round((Nbre_de_5/Nbre_total-(Nbre_de_123/Nbre_total))*100,2)." %"
    | rename pourcentage as deltaSAT
    | table deltaSAT date_month]
| stats values("Nbre appels") as "Nbre appels" values("Nbre_de_retour_enquete") as "Nbre_de_retour_enquete" values(Nbre_de_tracagesD) as Nbre_de_tracagesD values("Nbre_de_tracagesB") as "Nbre_de_tracagesB" values("Nbre_de_tracages_total") as "Nbre_de_tracages_total" values(deltaSAT) as deltaSAT by date_month
| eval pourcentage=round((Nbre_de_tracagesB/Nbre_de_tracages_total)*100, 2)." %"
| rename pourcentage as "Tx traçages bloquants"
| eval TxTracage=round((Nbre_de_tracagesD/'Nbre appels')*100,2)." %"
| rename TxTracage as "Tx traçages dédoublonnés"
| rename Nbre_de_tracagesD as "Nbre traçages dédoublonnés"
| eval date_month=case(date_month=="january", "01-Janvier", date_month=="february", "02-Février", date_month=="march", "03-Mars", date_month=="april", "04-Avril", date_month=="may", "05-Mai", date_month=="june", "06-Juin", date_month=="july", "07-Juillet", date_month=="august", "08-Août", date_month=="september", "09-Septembre", date_month=="october", "10-Octobre", date_month=="november", "11-Novembre", date_month=="december", "12-Décembre")
| sort date_month
| eval date_month=substr(date_month, 4)
| fields date_month, "Tx traçages bloquants", "Nbre appels", "Nbre traçages dédoublonnés", "Tx traçages dédoublonnés"
| transpose 15 header_field=date_month

I obtain this result, but I have a problem: I haven't worked on the date_year field, and I don't get the table in chronological order. Can you help me please?
Thank you, that was really helpful, but I still have one last small issue. What the SPL does: if values are missing, it adds true to the last flags, not to the correct ones. Example:
-config Conf -console -ntpsync no --check_core no
should give:
Flag: config, Value: Conf
Flag: console, Value: true
Flag: ntpsync, Value: no
Flag: check_core, Value: no
but it extracts all the values first and then adds true to the last values, so it gives this, which is incorrect:
Flag: config, Value: Conf
Flag: console, Value: no
Flag: ntpsync, Value: no
Flag: check_core, Value: true
So if you can show me a way to move the flags without values to the end, so that they are the ones matched with the true values, it would be really helpful.
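A sketch of one way to pair each flag with the value that immediately follows it, assuming flags start with one or two dashes, tokens are space-separated, and values never begin with a dash (the field name cmdline is a placeholder):

| rex field=cmdline max_match=0 "(?<pair>--?\w+(?:\s+(?!-)\S+)?)"
| mvexpand pair
| rex field=pair "^--?(?<flag>\w+)(?:\s+(?<value>.+))?$"
| eval value=coalesce(value, "true")

Because the first rex only attaches a token to a flag when that token does not itself start with a dash, a valueless flag such as -console captures nothing and gets "true" from the coalesce, instead of stealing the next flag's value.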
If SNOW will interpret a newline character as dividing the field into two lines then this may work. | rex field=summary mode=sed "s/from '(.*)/from\n'\1/"
That returns client and client_ip just fine. Seems like Splunk cares which index is mentioned first?
It's best to put sample data and configs in code blocks so the formatting is retained. You say you tried an inline regex in props. Would you please share that? The transform shared looks like it should have worked. Please confirm the sourcetype in inputs.conf matches that in props.conf. You have a source that works: how does that working config differ from the non-working one? Please share the complete stanzas so we can spot any differences.
Hello, is there any way that I can collect the data from all CloudWatch log groups under certain paths? I see that we shouldn't use wildcards. I have many log groups under certain paths, and new log groups will also be added in the future; it would be operational overhead for the team to keep adding them whenever a new log group is created. E.g. /aws/lambda/*, /aws/*. Can you guide me here? I am using the Splunk Add-on for AWS with the pull-based mechanism.
Hi All, I have a field called summary in my search: Failed backup of the transaction log for SQL Server database 'model' from 'WSQL040Q.tkmaxx.tjxcorp.net\\MSSQLSERVER'. I am creating this search for a ServiceNow alert, and I am sending this summary field value under comments in ServiceNow. I need it to break into two lines, like this:
Line 1: Failed backup of the transaction log for SQL Server database 'model' from
Line 2: 'WSQL040Q.tkmaxx.tjxcorp.net\\MSSQLSERVER'.
How do I implement this in my search? Thanks in advance!
Hi! How did you get the HTTP status code on BRUM? Regards,
Hi @scelikok I missed mentioning that the private key can be in one of the formats below:
format 1: PrivateKey : abc.def.ghi.jkl
format 2: PrivateKey :
Meaning it can be empty as well as in format 1.
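A sketch that tolerates both formats, assuming the value, when present, contains no spaces (the "empty" placeholder value is arbitrary):

| rex field=_raw "PrivateKey\s*:\s*(?<PrivateKey>\S*)"
| eval PrivateKey=if(isnull(PrivateKey) OR PrivateKey=="", "empty", PrivateKey)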
Also, you have to restart the Splunk instance the configuration lives on in order for it to take effect.
I am trying to extract some field values that come in the following format:
<date>00-mon-year</date> <DisplayName>example</DisplayName> <Hostname>example</Hostname>
Using the SPL rex function I am able to extract the fields with rex field=_raw "Hostname>(?<hostname>.*)<". I then tried an inline regex in props with the same regex (without quotes) and it does not work. I have also used a transforms.conf with the stanza as follows:
[hostname]
FORMAT = Hostname::$1
REGEX = Hostname>(?<hostname>(.*?))</Hostname
then in the props:
REPORT -Hostname = hostname
and this does not work either. However, I have another source that pulls the same type of logs, and there I am able to use an inline regex in SPL format just fine with no issues. This issue is specific to this source, which I have as [source::/opt/*]. Any ideas on a fix?
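For reference, a typical working pair looks like the sketch below (the sourcetype name is a placeholder): the REPORT key in props.conf takes no space around the hyphen, and when the REGEX uses a named capture group no FORMAT line is needed.

props.conf:
[your_sourcetype]
REPORT-hostname = hostname

transforms.conf:
[hostname]
REGEX = Hostname>(?<hostname>[^<]+)</Hostname>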
Yes.  IF you have a search head cluster (shc), AND you are trying to edit the config on one of the members instead of on the deployer, THEN that's exactly the message I expect you to get.  It *might* be possible to get that if you simply don't have some permission or another that's required, but I think those messages are different ones. So - Do you have a search head cluster? If you don't know, then ask your Splunk folks and/or have them manage this config for you. If you are the Splunk person and don't know what I'm saying (and you built it) then you don't have a SHC and we'll have to look into other things.   (Also, please be careful as to *which* "reply" button you click, so we can keep the threads going correctly instead of being willy-nilly all over the place!)
Splunk CrowdStrike Dashboard
Also, I think Splunk is using event.DetectID along with search action = allowed and event.DetectID along with search action = blocked, but I don't know where these fields connect to on the CrowdStrike side. Here's an example of what I saw on the Splunk side:
index=security "metadata.eventType"=DetectionSummaryEvent metadata.customerIDString=*
| search action=allowed
| stats dc(event.DetectId) as Detections
| lookup index=security sourcetype=CrowdStrike:Event:Streams:JSON "metadata.eventType"=DetectionSummaryEvent metadata.customerIDString=*
| search action=blocked
| stats dc(event.DetectId) as Detections