All Posts


You could try using an HTML panel with some text that uses the HTML codes for arrows, ← →, etc.
Yeah I tried that first, but other related issues on the boards led me to try it with the backslash. It returns the same results.
You don't need the backslash - here is a run-anywhere example showing it working:

| makeresults
| fields - _time
| eval ThisField=split("01-g01-0 01-g02-0 01-g03-0"," ")
| mvexpand ThisField
| rex field=ThisField mode=sed "s/g0/GRN/g"
Hi, you should remove the \ before GRN. r. Ismo
Stats combined the values by the unique correlation ID.
Hello world, I'm trying to use rex to rename the part of the strings below where it says "g0" to "GRN", so the output would read 01-GRN1-0, 01-GRN2-0, etc. I have been unable to get it to work, and any guidance to point me in the right direction would be much appreciated.

The rex statement in question:
| rex field=ThisField mode=sed "s/g0/\GRN/g"

Example strings:
01-g01-0
01-g02-0
01-g03-0
Based on your SPL and screenshot it seems to be an MV (multivalue) field. Some of your stats have combined it from several correlationId values, or whatever field you have after 'by' in your stats.
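If the goal is one row per value again, a minimal run-anywhere sketch (the field name and values here are illustrative, not from the original search) splits the multivalue field back out before aggregating:

| makeresults
| eval Status=split("SUCCESS ERROR"," ")
| mvexpand Status
| stats count by Status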
Hi, probably you missed the :port part from your input? Without the port it doesn't parse that input correctly. You could see e.g. https://community.splunk.com/t5/Getting-Data-In/udp-portnumber-Event-Blacklist-How-do-I-prevent-unwanted-data/m-p/613039 You have a typo in the transforms.conf name in your examples, but probably it's correct on your HF? And have you restarted it after modifying those configurations? r. Ismo
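For reference, a hedged sketch of the usual blacklist-to-nullQueue pattern on a HF - the port, stanza name, and regex below are placeholders, not taken from this thread:

props.conf:
[source::udp:514]
TRANSFORMS-null = setnull

transforms.conf:
[setnull]
REGEX = unwanted_pattern
DEST_KEY = queue
FORMAT = nullQueue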
Darn. Nope. All those conditions check out OK in my environment. New indexes are where they should be, it's a stand-alone deployment manager, etc.
No, I haven't, thanks!  Missed this in the release notes... Will let you know how it works out.
The condition is not working for me:

like('message', "%End of GL-import flow%") AND like('tracePoint', "EXCEPTION"), "SUCCESS",

If the message value = "End of GL-import flow" and the tracePoint value = "Exception", then it should be SUCCESS. Screenshot attached below.

index="mulesoft" applicationName="p-oracle-finance-ext" environment=DEV (*End of GL-import flow*) OR (tracePoint="EXCEPTION") OR (priority="WARN" AND message="GLImport Job Already Running, Please wait for the job to complete*") OR (message="End of GL Import process - No files found for import to ISG")
| rename content.File.fstatus as Status
| eval Status=case(
    like('Status', "SUCCESS"), "SUCCESS",
    like('message', "%End of GL-import flow%") AND like('tracePoint', "EXCEPTION"), "SUCCESS",
    like('tracePoint', "EXCEPTION") AND like('priority', "%ERROR%"), "ERROR",
    like('Status', "ERROR"), "ERROR",
    like('priority', "WARN"), "WARN",
    like('priority', "GLImport Job Already Running, Please wait for the job to complete%"), "WARN",
    like('message', "%End of GL Import process - No files found for import to ISG%"), "ERROR",
    1==1, "")
| stats values(content.File.fid) as "TransferBatch/OnDemand" values(content.File.fname) as "BatchName/FileName" values(content.File.fprocess_message) as ProcessMsg values(Status) as Status values(content.File.isg_file_batch_id) as OracleBatchID values(content.File.total_rec_count) as "Total Record Count" values(message) as message values(timestamp) as timestamp values(content.errorType) as errorType by correlationId
| eval ProcessMsg=coalesce(ProcessMsg, errorType, message)
| eventstats min(timestamp) AS Start_Time, max(timestamp) AS End_Time by correlationId
| eval StartTime=round(strptime(Start_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(End_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs, "%H:%M:%S")
| table Status Start_Time "TransferBatch/OnDemand" "BatchName/FileName" ProcessMsg OracleBatchID "Total Record Count" ElapsedTimeInSecs "Total Elapsed Time" correlationId
| join correlationId type=left
    [ search index="mulesoft" applicationName="p-oracle-finance-ext" environment=DEV (message="API: START: /v1/revpro-to-oracle/onDemand*") OR (message="API: START: /v1/fin_Zuora_GL_Revpro_JournalImport") OR (message="API: START: /v1/revproGLImport/onDemand*")
    | eval JobType=case(
        like('message', "API: START: /v1/revproGLImport/onDemand%"), "OnDemand",
        like('message', "API: START: /v1/revpro-to-oracle/onDemand%"), "OnDemand",
        like('message', "API: START: /v1/fin_Zuora_GL_Revpro_JournalImport"), "Scheduled")
    | table JobType correlationId ]
| table Status JobType Start_Time "TransferBatch/OnDemand" "BatchName/FileName" ProcessMsg OracleBatchID "Total Record Count" ElapsedTimeInSecs "Total Elapsed Time" correlationId
| fields - ElapsedTimeInSecs
| where JobType!=" "
Hi, have you read this: https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers ? r. Ismo
Hi, when your field names don't contain any special characters, it's safer and easier to leave the single quotes (') away. Basically those conditions seem to be OK. Can you give some samples which are not working? r. Ismo
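For example, a minimal run-anywhere sketch (the sample value is made up) where the plain field name works in like() without single quotes:

| makeresults
| eval message="End of GL-import flow"
| eval matched=if(like(message, "%End of GL-import flow%"), "yes", "no")

The single quotes are only needed when a field name contains spaces or other special characters, e.g. 'Total Record Count'.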
Are you getting IHF’s internal logs into SCP? Or any other logs via this IHF?
Hi, I am using multiple case conditions but one condition is not matching. In the third line of the code I used an AND condition: message=*End of GL* AND tracePoint=*Exception*. If that condition matches, Status should be set to SUCCESS. In my case it's showing both SUCCESS and ERROR in the table.

| eval Status=case(
    like('Status', "%SUCCESS%"), "SUCCESS",
    like('message', "%End of GL-import flow%") AND like('tracePoint', "%EXCEPTION%"), "SUCCESS",
    like('tracePoint', "%EXCEPTION%") AND like('priority', "%ERROR%"), "ERROR",
    like('Status', "%ERROR%"), "ERROR",
    like('priority', "%WARN%"), "WARN",
    like('priority', "GLImport Job Already Running, Please wait for the job to complete%"), "WARN",
    like('message', "%End of GL Import process - No files found for import to ISG%"), "ERROR",
    1==1, "")
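As a hedged, run-anywhere illustration (the sample values are made up): within a single event, case() returns the result of the first condition that matches and stops, so one event cannot get both SUCCESS and ERROR from this eval. Seeing both values in one table row usually means a later stats values(Status) collected results from several events sharing the same correlationId.

| makeresults
| eval message="End of GL-import flow", tracePoint="EXCEPTION", priority="ERROR"
| eval Status=case(
    like(message, "%End of GL-import flow%") AND like(tracePoint, "%EXCEPTION%"), "SUCCESS",
    like(tracePoint, "%EXCEPTION%") AND like(priority, "%ERROR%"), "ERROR",
    true(), "")

This returns SUCCESS even though the ERROR branch also matches, because the SUCCESS branch is evaluated first.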
Hello @gcusello. Sorry, I tried your last suggestion again, but num and num2 still have type "Number". I expect num2 to have type "String" after using num2=tostring(num,"commas"). Please suggest. Thanks again.
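A run-anywhere sketch for checking what type each field actually ends up with, using the standard typeof() eval function:

| makeresults
| eval num=12345678
| eval num2=tostring(num,"commas")
| eval num_type=typeof(num), num2_type=typeof(num2)

Here num_type should come back as "Number" and num2_type as "String".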
That makes sense. Thank you for replying. Do you have an example splunk_metadata.csv file? The Splunk documentation mentions separating items by vendor/type, but it does not mention where to find those.
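If this concerns Splunk Connect for Syslog (SC4S), a hypothetical sketch of the splunk_metadata.csv format, which is one key,metadata,value triple per line - the cisco_asa key below is the commonly documented example, the second row is only a placeholder pattern, and the keys actually honored are listed per source in the SC4S documentation:

cisco_asa,index,netfw
vendor_product,index,target_index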
I have made this work. I do not fully remember my thought process, but here is what I have. For those who want to just look at the code:

`chargeback_summary_index` source=chargeback_internal_ingestion_tracker idx IN (*) st IN (*) idx="*" earliest=-30d@d latest=now
| fields _time idx st ingestion_gb indexer_count License
| rename idx As index_name
| `chargeback_normalize_storage_info`
| bin _time span=1h
| stats Latest(ingestion_gb) As ingestion_gb_idx_st Latest(License) As License By _time index_name
| bin _time span=1d
| stats Sum(ingestion_gb_idx_st) As ingestion_idx_st_GB Latest(License) As License By _time index_name
`chargeback_comment(" | `chargeback_data_2_bunit(index,index_name,index_name)` ")`
| `chargeback_index_enrichment_priority_order`
| `chargeback_get_entitlement(ingest)`
| fillnull value=100 perc_ownership
| eval shared_idx = if(perc_ownership="100", "No", "Yes")
| eval ingestion_idx_st_GB = ingestion_idx_st_GB * perc_ownership / 100, ingest_unit_cost = ingest_yearly_cost / ingest_entitlement / 365
| fillnull value="Undefined" biz_unit, biz_division, biz_dep, biz_desc, biz_owner, biz_email
| fillnull value=0 ingest_unit_cost, ingest_yearly_cost, ingest_entitlement
| stats Latest(License) As License Latest(ingest_unit_cost) As ingest_unit_cost Latest(ingest_yearly_cost) As ingest_yearly_cost Latest(ingest_entitlement) As ingest_entitlement_GB Latest(shared_idx) As shared_idx Latest(ingestion_idx_st_GB) As ingestion_idx_st_GB Latest(perc_ownership) As perc_ownership Latest(biz_desc) As biz_desc Latest(biz_owner) As biz_owner Latest(biz_email) As biz_email Values(biz_division) As biz_division by _time, biz_unit, biz_dep, index_name
| eventstats Sum(ingestion_idx_st_GB) As ingestion_idx_GB by _time, index_name
| eventstats Sum(ingestion_idx_st_GB) As ingestion_bunit_dep_GB by _time, biz_unit, biz_dep, index_name
| eventstats Sum(ingestion_idx_st_GB) As ingestion_bunit_GB by _time, biz_unit, index_name
| eval ingestion_idx_st_TB = ingestion_idx_st_GB / 1024
| eval ingestion_idx_TB = ingestion_idx_GB / 1024
| eval ingestion_bunit_dep_TB = ingestion_bunit_dep_GB / 1024
| eval ingestion_bunit_TB = ingestion_idx_GB / 1024
| eval ingestion_bunit_dep_cost = ingestion_bunit_dep_GB * ingest_unit_cost
| eval ingestion_bunit_cost = ingestion_bunit_GB * ingest_unit_cost
| eval Time_Period = strftime(_time, "%a %b %d %Y")
| search biz_unit IN ("*") biz_dep IN ("*") shared_idx=* _time IN (*) biz_owner IN ("*") biz_desc IN ("*") biz_unit IN ("*")
| table Time_Period biz_unit biz_dep Time_Period index_name st perc_ownership ingestion_idx_GB ingestion_idx_st_GB ingestion_bunit_dep_GB ingestion_bunit_GB ingestion_bunit_dep_cost ingestion_bunit_cost biz_desc biz_owner biz_email
| sort 0 - ingestion_idx_GB
| rename st As Sourcetype ingestion_bunit_dep_cost as "Cost B-Unit/Dep", ingestion_bunit_cost As "Cost B-Unit", biz_unit As B-Unit, biz_dep As Department, index_name As Index, perc_ownership As "% Ownership", ingestion_idx_st_GB AS "Ingestion Sourcetype GB", ingestion_idx_GB As "Ingestion_Index_GB", ingestion_bunit_dep_GB As "Ingestion B-Unit/Dep GB", ingestion_bunit_GB As "Ingestion B-Unit GB", Time_Period as Date_Range
| eval Date_Range_timestamp = strptime(Date_Range, "%a %b %d %Y")
| stats sum("Ingestion B-Unit GB") as Total_Ingestion_by_BUnit_GB sum("Cost B-Unit") as Total_BUnit_Cost values(Date_Range) as Date_Range min(Date_Range_timestamp) as Earliest_Date max(Date_Range_timestamp) as Latest_Date by B-Unit
| eval Total_Ingestion_by_BUnit_GB = round(Total_Ingestion_by_BUnit_GB, 4)
| eval Total_BUnit_Cost = round(Total_BUnit_Cost, 3)
| eval Earliest_Date = strftime(Earliest_Date, "%a %b %d %Y")
| eval Latest_Date = strftime(Latest_Date, "%a %b %d %Y")
| eval Date_Range = Earliest_Date . " - " . Latest_Date
| fieldformat Total_BUnit_Cost = printf("%'.2f USD", 'Total_BUnit_Cost')
| table Date_Range B-Unit Total_Ingestion_by_BUnit_GB Total_BUnit_Cost

I believe I kept bringing in _time every step of the way with each stats. I make the Time_Period with:

| eval Time_Period = strftime(_time, "%a %b %d %Y")

And then I do most of the manipulation here:

| eval Date_Range_timestamp = strptime(Date_Range, "%a %b %d %Y")
| stats sum("Ingestion B-Unit GB") as Total_Ingestion_by_BUnit_GB sum("Cost B-Unit") as Total_BUnit_Cost values(Date_Range) as Date_Range min(Date_Range_timestamp) as Earliest_Date max(Date_Range_timestamp) as Latest_Date by B-Unit
| eval Total_Ingestion_by_BUnit_GB = round(Total_Ingestion_by_BUnit_GB, 4)
| eval Total_BUnit_Cost = round(Total_BUnit_Cost, 3)
| eval Earliest_Date = strftime(Earliest_Date, "%a %b %d %Y")
| eval Latest_Date = strftime(Latest_Date, "%a %b %d %Y")
| eval Date_Range = Earliest_Date . " - " . Latest_Date
| fieldformat Total_BUnit_Cost = printf("%'.2f USD", 'Total_BUnit_Cost')
| table Date_Range B-Unit Total_Ingestion_by_BUnit_GB Total_BUnit_Cost

It's been a while and I forget what my thought process was, but here's the code and it may help.
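A minimal run-anywhere sketch of the same round-trip idea - format _time to a display string, parse it back with strptime() so min/max work on epoch values, then format the range (the generated dates here are synthetic):

| makeresults count=3
| streamstats count as n
| eval Date_Range=strftime(relative_time(now(), "-" . n . "d@d"), "%a %b %d %Y")
| eval ts=strptime(Date_Range, "%a %b %d %Y")
| stats min(ts) as Earliest_Date max(ts) as Latest_Date
| eval Date_Range=strftime(Earliest_Date, "%a %b %d %Y") . " - " . strftime(Latest_Date, "%a %b %d %Y")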
Hoping someone can help, as I'm relatively new to Splunk On-Call administration. When our system sends an alert to multiple Splunk On-Call email addresses using multiple routing keys, the system only uses the first routing key in the list of recipients and drops everything else. For example, if I send an email to 00000000+RoutingKey1@alert.victorops.com; 00000000+RoutingKey2@alert.victorops.com, Splunk On-Call will create an alert for RoutingKey1 but no alerts are created for RoutingKey2. Is there an Alert Rule syntax that will extract these so it creates alerts for both? Thanks.
Hi @Hassaan.Javaid, Did you get a chance to check out that linked post?