All Posts

Are you getting IHF’s internal logs into SCP? Or any other logs via this IHF?
Hi, I am using multiple case conditions but the condition is not matching. In the third line of the case block I used an AND condition: message=*End of GL* AND tracepoint=*Exception*. If the condition matches, Status should be set to SUCCESS. In my case the table is showing both SUCCESS and ERROR.

| eval Status=case(
    like('Status', "%SUCCESS%"), "SUCCESS",
    like('message', "%End of GL-import flow%") AND like('tracePoint', "%EXCEPTION%"), "SUCCESS",
    like('tracePoint', "%EXCEPTION%") AND like('priority', "%ERROR%"), "ERROR",
    like('Status', "%ERROR%"), "ERROR",
    like('priority', "%WARN%"), "WARN",
    like('priority', "GLImport Job Already Running, Please wait for the job to complete%"), "WARN",
    like('message', "%End of GL Import process - No files found for import to ISG%"), "ERROR",
    1==1, "")
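A note on the case() behavior above: case() evaluates its conditions in order and returns the value of the first match for each event, so a single event can never be both SUCCESS and ERROR. Seeing both in one table usually means different events are matching different branches, or a more general condition fires before a more specific one. A minimal, runnable sketch of the ordering behavior, using illustrative field values rather than the original data:

| makeresults count=2
| streamstats count AS n
| eval message = if(n=1, "End of GL-import flow", "other")
| eval tracePoint = "EXCEPTION"
| eval priority = if(n=1, "INFO", "ERROR")
| eval Status = case(
    like(message, "%End of GL-import flow%") AND like(tracePoint, "%EXCEPTION%"), "SUCCESS",
    like(tracePoint, "%EXCEPTION%") AND like(priority, "%ERROR%"), "ERROR",
    true(), "UNMATCHED")
| table n message tracePoint priority Status

Here event 1 matches the SUCCESS branch and event 2 matches the ERROR branch, so the table shows both values even though each event gets exactly one. Also note that the like() pattern for "GLImport Job Already Running" has no leading %, so it only matches values that start with that text.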
Hello @gcusello  Sorry.. I tried your last suggestion again, but num and num2 still have the type "Number". I expect num2 to have the "String" type after using

num2 = tostring(num, "commas")

Please suggest. Thanks again..
That makes sense. Thank you for replying. Do you have an example splunk_metadata.csv file? The Splunk documentation mentions separating items by vendor/type, but it does not mention where to find those.
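For anyone hitting the same question: in Splunk Connect for Syslog (SC4S), splunk_metadata.csv is a simple three-column CSV of vendor_product key, metadata item, and value. A minimal sketch, with illustrative (not prescribed) index names:

cisco_asa,index,netfw
cisco_asa,sourcetype,cisco:asa
pan_panos,index,netfw

The vendor_product keys themselves come from the SC4S source documentation page for each supported device type.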
I have made this work. I do not fully remember my thought process, but here is what I have. For those who want to just look at the code:

`chargeback_summary_index` source=chargeback_internal_ingestion_tracker idx IN (*) st IN (*) idx="*" earliest=-30d@d latest=now
| fields _time idx st ingestion_gb indexer_count License
| rename idx As index_name
| `chargeback_normalize_storage_info`
| bin _time span=1h
| stats Latest(ingestion_gb) As ingestion_gb_idx_st Latest(License) As License By _time index_name
| bin _time span=1d
| stats Sum(ingestion_gb_idx_st) As ingestion_idx_st_GB Latest(License) As License By _time index_name `chargeback_comment(" | `chargeback_data_2_bunit(index,index_name,index_name)` ")`
| `chargeback_index_enrichment_priority_order`
| `chargeback_get_entitlement(ingest)`
| fillnull value=100 perc_ownership
| eval shared_idx = if(perc_ownership="100", "No", "Yes")
| eval ingestion_idx_st_GB = ingestion_idx_st_GB * perc_ownership / 100, ingest_unit_cost = ingest_yearly_cost / ingest_entitlement / 365
| fillnull value="Undefined" biz_unit, biz_division, biz_dep, biz_desc, biz_owner, biz_email
| fillnull value=0 ingest_unit_cost, ingest_yearly_cost, ingest_entitlement
| stats Latest(License) As License Latest(ingest_unit_cost) As ingest_unit_cost Latest(ingest_yearly_cost) As ingest_yearly_cost Latest(ingest_entitlement) As ingest_entitlement_GB Latest(shared_idx) As shared_idx Latest(ingestion_idx_st_GB) As ingestion_idx_st_GB Latest(perc_ownership) As perc_ownership Latest(biz_desc) As biz_desc Latest(biz_owner) As biz_owner Latest(biz_email) As biz_email Values(biz_division) As biz_division by _time, biz_unit, biz_dep, index_name
| eventstats Sum(ingestion_idx_st_GB) As ingestion_idx_GB by _time, index_name
| eventstats Sum(ingestion_idx_st_GB) As ingestion_bunit_dep_GB by _time, biz_unit, biz_dep, index_name
| eventstats Sum(ingestion_idx_st_GB) As ingestion_bunit_GB by _time, biz_unit, index_name
| eval ingestion_idx_st_TB = ingestion_idx_st_GB / 1024
| eval ingestion_idx_TB = ingestion_idx_GB / 1024
| eval ingestion_bunit_dep_TB = ingestion_bunit_dep_GB / 1024
| eval ingestion_bunit_TB = ingestion_idx_GB / 1024
| eval ingestion_bunit_dep_cost = ingestion_bunit_dep_GB * ingest_unit_cost
| eval ingestion_bunit_cost = ingestion_bunit_GB * ingest_unit_cost
| eval Time_Period = strftime(_time, "%a %b %d %Y")
| search biz_unit IN ("*") biz_dep IN ("*") shared_idx=* _time IN (*) biz_owner IN ("*") biz_desc IN ("*") biz_unit IN ("*")
| table Time_Period biz_unit biz_dep index_name st perc_ownership ingestion_idx_GB ingestion_idx_st_GB ingestion_bunit_dep_GB ingestion_bunit_GB ingestion_bunit_dep_cost ingestion_bunit_cost biz_desc biz_owner biz_email
| sort 0 - ingestion_idx_GB
| rename st As Sourcetype ingestion_bunit_dep_cost As "Cost B-Unit/Dep", ingestion_bunit_cost As "Cost B-Unit", biz_unit As B-Unit, biz_dep As Department, index_name As Index, perc_ownership As "% Ownership", ingestion_idx_st_GB As "Ingestion Sourcetype GB", ingestion_idx_GB As "Ingestion_Index_GB", ingestion_bunit_dep_GB As "Ingestion B-Unit/Dep GB", ingestion_bunit_GB As "Ingestion B-Unit GB", Time_Period As Date_Range
| eval Date_Range_timestamp = strptime(Date_Range, "%a %b %d %Y")
| stats sum("Ingestion B-Unit GB") As Total_Ingestion_by_BUnit_GB sum("Cost B-Unit") As Total_BUnit_Cost values(Date_Range) As Date_Range min(Date_Range_timestamp) As Earliest_Date max(Date_Range_timestamp) As Latest_Date by B-Unit
| eval Total_Ingestion_by_BUnit_GB = round(Total_Ingestion_by_BUnit_GB, 4)
| eval Total_BUnit_Cost = round(Total_BUnit_Cost, 3)
| eval Earliest_Date = strftime(Earliest_Date, "%a %b %d %Y")
| eval Latest_Date = strftime(Latest_Date, "%a %b %d %Y")
| eval Date_Range = Earliest_Date . " - " . Latest_Date
| fieldformat Total_BUnit_Cost = printf("%'.2f USD", 'Total_BUnit_Cost')
| table Date_Range B-Unit Total_Ingestion_by_BUnit_GB Total_BUnit_Cost

I believe I kept bringing _time along at every step, with each stats. I make Time_Period with:

| eval Time_Period = strftime(_time, "%a %b %d %Y")

And then I do most of the manipulation here:

| eval Date_Range_timestamp = strptime(Date_Range, "%a %b %d %Y")
| stats sum("Ingestion B-Unit GB") As Total_Ingestion_by_BUnit_GB sum("Cost B-Unit") As Total_BUnit_Cost values(Date_Range) As Date_Range min(Date_Range_timestamp) As Earliest_Date max(Date_Range_timestamp) As Latest_Date by B-Unit
| eval Total_Ingestion_by_BUnit_GB = round(Total_Ingestion_by_BUnit_GB, 4)
| eval Total_BUnit_Cost = round(Total_BUnit_Cost, 3)
| eval Earliest_Date = strftime(Earliest_Date, "%a %b %d %Y")
| eval Latest_Date = strftime(Latest_Date, "%a %b %d %Y")
| eval Date_Range = Earliest_Date . " - " . Latest_Date
| fieldformat Total_BUnit_Cost = printf("%'.2f USD", 'Total_BUnit_Cost')
| table Date_Range B-Unit Total_Ingestion_by_BUnit_GB Total_BUnit_Cost

It's been a while and I forget what my thought process was, but here's the code; it may help.
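The load-bearing idea in the post above, reduced to a minimal hedged sketch: bin _time once, then keep _time in every stats/eventstats by clause so the per-day grouping survives successive aggregations (the index and field names here are illustrative):

| tstats count WHERE index=* BY _time span=1d index
| stats sum(count) AS daily_events BY _time index
| eval Time_Period = strftime(_time, "%a %b %d %Y")
| stats sum(daily_events) AS total_events BY Time_Period index

Once _time is dropped from a by clause, the daily granularity is gone, which is presumably why the original keeps carrying it forward until the final roll-up by B-Unit.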
Hoping someone can help as I'm relatively new to Splunk On-Call administration. When our system sends an alert to multiple Splunk On-Call email addresses using multiple routing keys, the system only uses the first routing key in the list of recipients and drops everything else. For example, if I send an email to 00000000+RoutingKey1@alert.victorops.com; 00000000+RoutingKey2@alert.victorops.com, Splunk On-Call will create an alert for RoutingKey1 but no alert is created for RoutingKey2. Is there an Alert Rule syntax that will extract these so it creates alerts for both? Thanks.
Hi @Hassaan.Javaid, Did you get a chance to check out that linked post? 
Hi everyone, since our Splunk is not connected to the network, I wanted to know whether it is possible to use vt4splunk in offline mode.
Linux, RHEL 8.9. Splunk 9.2.0.1

Had a forwarder manager running (for years) with 2,000+ clients connecting. Did the upgrade from 9.1 to 9.2.0.1 and now have "No clients phoned home." No firewall or selinux issues are noted.

Getting gazillions of:

03-21-2024 09:59:59.050 -0500 WARN AutoLoadBalancedConnectionStrategy [8459 TcpOutEloop] - Current dest host connection 10.14.8.107:9997, oneTimeClient=0, _events.size()=20, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Thu Mar 21 09:59:45 2024 is using 18446604244100536835 bytes. Total tcpout queue size is 512000. Warningcount=301

Funny thing is, that's the only "error" (warning) I have. It otherwise looks like it's seeing clients:

03-21-2024 09:59:15.468 -0500 INFO PubSubSvr [842449 TcpChannelThread] - Subscribed: channel=tenantService/handshake/reply/carmenw2pc/A265FEF1-4A37-4D58-90ED-AD1142694F05 connectionId=connection_10.14.72.83_8089_blah.domain.edu_blah_A265FEF1-4A37-4D58-90ED-AD1142694F05 listener=0x7f2c78d44000
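Not from the original thread, but a few low-risk first checks for the no-phone-home symptom after an upgrade, assuming a default $SPLUNK_HOME:

# On the deployment server: list the clients it currently knows about
$SPLUNK_HOME/bin/splunk list deploy-clients

# Scan splunkd.log for phone-home and HTTP listener errors
grep -iE "phonehome|HttpListener" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -50

# On one affected client: confirm where it thinks the deployment server lives
$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug

The WARN shown above comes from the tcpout (forwarding) path, which is separate from deployment-server phone-homes over the management port, so it may be a red herring for the phone-home problem.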
Hi @Osama.Abbas, I'm still waiting to hear back from the Docs team. Have you found a solution you could share in the meantime?
Hi, I am working on a prototype of a Splunk dashboard that has 30+ panels. The panels basically compare upstream and downstream data/volume. The client would like to see arrow marks or lines between the panels to show the connections. Could you please share an XML source reference? Thanks, Selvam.
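Not an authoritative answer, just a common workaround: Simple XML has no native connector or arrow element, so arrows between panels are usually drawn with an <html> panel containing inline SVG, placed in a row between the two panels (newer Splunk versions may sanitize some markup, so test on your version first). A minimal sketch with illustrative titles and placeholder searches:

<dashboard>
  <label>Upstream vs Downstream (sketch)</label>
  <row>
    <panel>
      <title>Upstream volume</title>
      <single><search><query>| makeresults | eval GB=120 | table GB</query></search></single>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <div style="text-align:center;">
          <svg width="60" height="60">
            <defs>
              <marker id="arrow" markerWidth="10" markerHeight="10" refX="6" refY="3" orient="auto">
                <path d="M0,0 L6,3 L0,6 Z"/>
              </marker>
            </defs>
            <line x1="30" y1="0" x2="30" y2="50" stroke="black" stroke-width="2" marker-end="url(#arrow)"/>
          </svg>
        </div>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <title>Downstream volume</title>
      <single><search><query>| makeresults | eval GB=118 | table GB</query></search></single>
    </panel>
  </row>
</dashboard>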
hi @LearningGuy, as I said, you have to say what kind of string you want:

| makeresults
| eval num = 1
| eval var_type = typeof(num)
| eval num2 = tostring(num,"commas")
| eval var_type2 = typeof(num2)

Ciao. Giuseppe
@gcusello  Yes, but why is the first one also a string? The first one is a number. Should I remove the " " from typeof?

| makeresults
| eval num = 1
| eval var_type = typeof(num)
| eval num2 = tostring("num")
| eval var_type2 = typeof(num2)

Thanks
Did you ever get resolution on this? My deployment server stopped servicing clients and started throwing this error. No firewall or selinux issues, as suggested below...
Hi @LearningGuy, the second one is a string: you transformed it using the tostring function, and in fact you have the commas. Ciao. Giuseppe
They both became String. num should be a number. Thanks
Hello, I solved it by installing the universal forwarder credentials package again.

But now it is connected and I am not receiving data. Can you help me troubleshoot a Splunk deployment where I am sending syslog events to a heavy forwarder, and the heavy forwarder has to forward them to Splunk Cloud? These are the .conf files:

inputs.conf

[udp://1514]
sourcetype = pan:firewall
no_appending_timestamp = true
index = mx_paloalto
disabled = 0

[splunktcp://9997]
disabled = 0

outputs.conf

[tcpout]
defaultGroup = splunkcloud_20231028_9aaa4b04216cd9a0a4dc1eb274307fd1
useACK = true

[tcpout:splunkcloud_20231028_9aaa4b04216cd9a0a4dc1eb274307fd1]
server = inputs1.tenant.splunkcloud.com:9997, inputs2.tenant.splunkcloud.com:9997, inputs3.tenant.splunkcloud.com:9997, inputs4.tenant.splunkcloud.com:9997, inputs5.tenant.splunkcloud.com:9997, inputs6.tenant.splunkcloud.com:9997, inputs7.tenant.splunkcloud.com:9997, inputs8.tenant.splunkcloud.com:9997, inputs9.tenant.splunkcloud.com:9997, inputs10.tenant.splunkcloud.com:9997, inputs11.tenant.splunkcloud.com:9997, inputs12.tenant.splunkcloud.com:9997, inputs13.tenant.splunkcloud.com:9997, inputs14.tenant.splunkcloud.com:9997, inputs15.tenant.splunkcloud.com:9997
compressed = false
clientCert = $SPLUNK_HOME/etc/apps/100_tenant_splunkcloud/default/tenant_server.pem
sslCommonNameToCheck = *.tenant.splunkcloud.com
sslVerifyServerCert = true
sslVerifyServerName = true
useClientSSLCompression = true
autoLBFrequency = 120

[tcpout:scs]
disabled = 1
server = tenant.forwarders.scs.splunk.com:9997
compressed = true
clientCert = $SPLUNK_HOME/etc/apps/100_tenant_splunkcloud/default/tenant_server.pem
sslAltNameToCheck = *.forwarders.scs.splunk.com
sslVerifyServerCert = true
useClientSSLCompression = false
autoLBFrequency = 120

server.conf

[general]
serverName = hvyfwd
pass4SymmKey = $7$7+sDZpk4U5p8+jEvGlsFjca8/McSNMoOO/O4HIN+nkKs0FoDGr5s6Q==

[sslConfig]
sslPassword = $7$FMfYp/ZEJtp12iajMolR3PORwlFOl4WgEuJSfl2YIjfBn7Dw7t/ILg==

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
peers = *
quota = MAX
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free

[license]
active_group = Forwarder

And this is the output of tcpdump:

[root@hvyfwd local]# tcpdump -i any udp port 1514
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
11:26:45.136626 IP static-confidential_ip.47441 > hvyfwd.fujitsu-dtcns: UDP, length 652
11:26:45.136752 IP static-confidential_ip.47441 > hvyfwd.fujitsu-dtcns: UDP, length 658
11:26:45.136771 IP static-confidential_ip.35720 > hvyfwd.fujitsu-dtcns: UDP, length 661
11:26:45.136796 IP static-confidential_ip.35720 > hvyfwd.fujitsu-dtcns: UDP, length 752
11:26:45.136861 IP static-confidential_ip.47441 > hvyfwd.fujitsu-dtcns: UDP, length 715
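A hedged first-pass checklist for the connected-but-no-data symptom (the host and index names come from the post above):

# On the heavy forwarder: confirm the UDP input and tcpout group actually loaded
$SPLUNK_HOME/bin/splunk btool inputs list udp --debug
$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug

Then, from Splunk Cloud search, check for blocked queues and output errors on the HF, and whether the target index receives anything:

index=_internal host=hvyfwd source=*metrics.log* group=queue blocked=true
index=_internal host=hvyfwd source=*splunkd.log* (ERROR OR WARN) TcpOutputProc
index=mx_paloalto earliest=-15m

Since tcpdump shows the syslog packets arriving, the usual suspects are the mx_paloalto index not existing in the Cloud stack, a TLS/certificate failure on tcpout, or useACK back-pressure blocking the queues.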
Hi @LearningGuy, what happens when running:

| makeresults
| eval num = 1000
| eval var_type = typeof("num")
| eval num2 = tostring(num, "commas")
| eval var_type2 = typeof("num2")

Ciao. Giuseppe
Hello @gcusello, I tried and got the same result.. see below.. thank you
Hi @LearningGuy, see https://docs.splunk.com/Documentation/SCS/current/SearchReference/ConversionFunctions and try:

| makeresults
| eval num = 1
| eval var_type = typeof('num')
| eval num2 = tostring(num, "commas")
| eval var_type2 = typeof('num2')

Ciao. Giuseppe
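A closing note on this thread: in eval, double quotes create a string literal and single quotes create a field reference, so typeof("num") always reports String because it is typing the literal text num, while typeof(num) or typeof('num') types the field's value. Separately, tostring(num, "commas") returns a string by design, so num2 reporting String is the expected, correct result. A runnable illustration:

| makeresults
| eval num = 1
| eval type_of_literal = typeof("num")
| eval type_of_field = typeof(num)
| eval num2 = tostring(num, "commas")
| eval type_of_num2 = typeof(num2)
| table type_of_literal type_of_field type_of_num2

This returns String, Number, and String respectively.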