All Posts


@richgalloway   It works! That 'trace0' now shows the out-of-condition data, like > 20. Thank you for your help!
@gcusello  I think you missed my point - in your example you are using the CSV to test, not the lookup definition, so the test is not the same as the DM. Your test should use the lookup definition to make sure it also works.
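To illustrate the difference (the lookup names here are hypothetical, just a sketch): testing with

| inputlookup my_lookup.csv

only exercises the CSV file, whereas

| inputlookup my_lookup_definition

exercises the lookup definition itself, including any match or case-sensitivity settings defined on it, which is what the data model actually references.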
The appendcols command runs after the main search, but it's true the subsearch has no awareness of fields outside the subsearch.  Thanks for pointing that out, @bowesmana 
I see you are already using the Plotly boxplot charts. I can't confirm, but it seems it may be something inside the Plotly code itself - that viz is made by Splunk Works, but I'm not sure who looks after it now. Can you check in your browser developer tools to see if there is anything reported in the Console log?
More of just an additional screenshot for context and a field name with a description of the eval that was done to it... but I see that it has caused some confusion. Here is the full search of the local emulation.

| makeresults
| eval employee_data="[{'company':'company A','name': 'employee A1','position': None}, {'company': 'company A','name': 'employee A2','position': None}]"
| append [ | makeresults | eval employee_data="[{'company':'company B','name': 'employee B1','position': None}, {'company': 'company B','name': 'employee B2','position': None}]" ]
| append [ | makeresults | eval employee_data="[{'company':'company C','name': 'employee C1','position': None}, {'company': 'company C','name': 'employee C2','position': None}]" ]
| eval formatted_data=replace(replace(employee_data, "\'", "\""), "None", "\"None\"")
| spath input=formatted_data
Hello, Which part of your search puts the data into the new field "formatted_data"? I don't see "formatted_data" in the search. Can you paste the whole search, including how you put in the simulated data? Thank you for your help.
Maybe something like this?

<base_search>
| eval employee_data=replace(replace(employee_data, "\'", "\""), "None", "\"None\"")
| spath input=employee_data

Testing on my local instance, it looks like it worked out.
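One follow-up note in case it helps: since employee_data is a JSON array, spath auto-extraction should (in my testing) produce multivalue fields named after the array paths, such as {}.company, {}.name and {}.position for the sample data above. You can then give them friendlier names with something like

| rename "{}.company" AS company, "{}.name" AS name, "{}.position" AS position
| table company name position

The field names here assume the sample data shown above; adjust them to whatever your real JSON contains.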
I have a question about how to properly extract the time range between events to use as the value for a Date-Range column. I'm setting up the Chargeback app and building a specific report. Currently I'm tracking the total ingestion by Biz_Unit. The main Splunk query works fine, but there's a lot of time manipulation within the search, and I'm not sure how to properly set the date I need. Here is an example of some of the output.

This is the query; I know it's a large query, but it outputs all of the fields used in Chargeback.

`chargeback_summary_index` source=chargeback_internal_ingestion_tracker idx IN (*) st IN (*) idx="*" earliest=-7d@d latest=now
| fields _time idx st ingestion_gb indexer_count License
| rename idx As index_name
| `chargeback_normalize_storage_info`
| bin _time span=1h
| stats Latest(ingestion_gb) As ingestion_gb_idx_st Latest(License) As License By _time, index_name, st
| bin _time span=1d
| stats Sum(ingestion_gb_idx_st) As ingestion_idx_st_GB Latest(License) As License By _time, index_name, st `chargeback_comment(" | `chargeback_data_2_bunit(index,index_name,index_name)` ")`
| `chargeback_index_enrichment_priority_order`
| `chargeback_get_entitlement(ingest)`
| fillnull value=100 perc_ownership
| eval shared_idx = if(perc_ownership="100", "No", "Yes")
| eval ingestion_idx_st_GB = ingestion_idx_st_GB * perc_ownership / 100 , ingest_unit_cost = ingest_yearly_cost / ingest_entitlement / 365
| fillnull value="Undefined" biz_unit, biz_division, biz_dep, biz_desc, biz_owner, biz_email
| fillnull value=0 ingest_unit_cost, ingest_yearly_cost, ingest_entitlement
| stats Latest(License) As License Latest(ingest_unit_cost) As ingest_unit_cost Latest(ingest_yearly_cost) As ingest_yearly_cost Latest(ingest_entitlement) As ingest_entitlement_GB Latest(shared_idx) As shared_idx Latest(ingestion_idx_st_GB) As ingestion_idx_st_GB Latest(perc_ownership) As perc_ownership Latest(biz_desc) As biz_desc Latest(biz_owner) As biz_owner Latest(biz_email) As biz_email Values(biz_division) As biz_division by _time, biz_unit, biz_dep, index_name, st
| eventstats Sum(ingestion_idx_st_GB) As ingestion_idx_GB by _time, index_name
| eventstats Sum(ingestion_idx_st_GB) As ingestion_bunit_dep_GB by _time, biz_unit, biz_dep, index_name
| eventstats Sum(ingestion_idx_st_GB) As ingestion_bunit_GB by _time, biz_unit, index_name
| eval ingestion_idx_st_TB = ingestion_idx_st_GB / 1024 , ingestion_idx_st_PB = ingestion_idx_st_TB / 1024 ,ingestion_idx_TB = ingestion_idx_GB / 1024 , ingestion_idx_PB = ingestion_idx_TB / 1024 , ingestion_bunit_dep_TB = ingestion_bunit_dep_GB / 1024 , ingestion_bunit_dep_PB = ingestion_bunit_dep_TB / 1024, ingestion_bunit_TB = ingestion_idx_GB / 1024 , ingestion_bunit_PB = ingestion_bunit_TB / 1024
| eval ingestion_bunit_dep_cost = ingestion_bunit_dep_GB * ingest_unit_cost, ingestion_bunit_cost = ingestion_bunit_GB * ingest_unit_cost, ingestion_idx_st_cost = ingestion_idx_st_GB * ingest_unit_cost
| eval ingest_entitlement_TB = ingest_entitlement_GB / 1024, ingest_entitlement_PB = ingest_entitlement_TB / 1024
| eval Time_Period = strftime(_time, "%a %b %d %Y")
| search biz_unit IN ("*") biz_dep IN ("*") shared_idx=* _time IN (*) biz_owner IN ("*") biz_desc IN ("*") biz_unit IN ("*")
| table biz_unit biz_dep Time_Period index_name st perc_ownership ingestion_idx_GB ingestion_idx_st_GB ingestion_bunit_dep_GB ingestion_bunit_GB ingestion_bunit_dep_cost ingestion_bunit_cost biz_desc biz_owner biz_email
| sort 0 - ingestion_idx_GB
| rename st As Sourcetype ingestion_bunit_dep_cost as "Cost B-Unit/Dep", ingestion_bunit_cost As "Cost B-Unit", biz_unit As B-Unit, biz_dep As Department, index_name As Index, perc_ownership As "% Ownership", ingestion_idx_st_GB AS "Ingestion Sourcetype GB", ingestion_idx_GB As "Ingestion Index GB", ingestion_bunit_dep_GB As "Ingestion B-Unit/Dep GB",ingestion_bunit_GB As "Ingestion B-Unit GB", biz_desc As "Business Description", biz_owner As "Business Owner", biz_email As "Business Email"
| fieldformat Cost B-Unit/Dep = printf("%'.2f USD",'Cost B-Unit/Dep')
| fieldformat Cost B-Unit = printf("%'.2f USD",'Cost B-Unit')
| search Index = testing
| dedup Time_Period
| table B-Unit Time_Period "Ingestion B-Unit GB"

The above image shows what I'm trying to extract. The query has binned _time twice:

| fields _time idx st ingestion_gb indexer_count License
| rename idx As index_name
| `chargeback_normalize_storage_info`
| bin _time span=1h
| stats Latest(ingestion_gb) As ingestion_gb_idx_st Latest(License) As License By _time, index_name, st
| bin _time span=1d
| stats Sum(ingestion_gb_idx_st) As ingestion_idx_st_GB Latest(License) As License By _time, index_name, st

I've asked our GPT-equivalent bot how to properly do it, and it mentioned that when I'm sorting the stats by _time and index, it was overwriting the time variable. It also kept recommending that I change an eval down near the bottom of the query, something like:

| stats sum(Ingestion_Index_GB) as Ingestion_Index_GB sum("Ingestion B-Unit GB") as "Ingestion B-Unit GB" sum("Cost B-Unit") as "Cost B-Unit" earliest(_time) as early_time latest(_time) as late_time by B-Unit
| eval Date_Range = strftime(early_time, "%Y-%m-%d %H:%M:%S") . " - " . strftime(late_time, "%Y-%m-%d %H:%M:%S")
| table Date_Range B-Unit Ingestion_Index_GB "Ingestion B-Unit GB" "Cost B-Unit"

Other times it said the value wasn't in string format, so I couldn't use strftime.

Overall, I'm now confused as to what is happening to the _time value. All I want is to get the earliest and latest values by index and set that as Date_Range. Can someone help me with this and possibly explain what is happening to the _time variable as it keeps getting manipulated and sorted by?

This is the search query found in the Chargeback app under the Storage tab. It's the "Daily Ingestion By Index, B-Unit & Department" search. If anyone has any ideas, any help would be much appreciated.
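A hedged sketch in case it helps anyone reading along (an illustration only, not a tested change to the Chargeback report): _time survives both stats calls because it is a by-field, but the mid-query | table biz_unit biz_dep Time_Period ... drops it, so anything downstream that calls strftime(_time, ...) is working on a field that no longer exists. One way around that is to compute the range per index while _time is still present, for example just before that table command:

| eventstats min(_time) as early_time max(_time) as late_time by index_name
| eval Date_Range = strftime(early_time, "%Y-%m-%d") . " - " . strftime(late_time, "%Y-%m-%d")

and then include Date_Range in the subsequent table commands so it reaches the final output. strftime() only needs an epoch (numeric) value; Time_Period is already a formatted string, which is the usual source of the "not in string format" style confusion.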
@PickleRick  One thing, no 2 really, or actually 3, or maybe 4
1. Item 1
2. I thought of this halfway through my reply
3. I am using an editor
4. I can correct mistakes
5. Or can I?
Hello,

| dbxquery connection=test query="select employee_data from company"

The following employee_data is not in proper JSON format, so I can't use spath. How do I replace each single quote (') with a double quote ("), replace None with "None", and put the result in a new field? Thank you for your help.

employee_data
[{'company':'company A','name': 'employee A1','position': None}, {'company': 'company A','name': 'employee A2','position': None}]
[{'company':'company B','name': 'employee B1','position': None}, {'company': 'company B','name': 'employee B2','position': None}]
[{'company':'company C','name': 'employee C1','position': None}, {'company': 'company C','name': 'employee C2','position': None}]
If you have little knowledge, then an easy way to learn is to deconstruct the search: run the first line, understand what you see, then add line 2, run again, then line 3, and so on. When you have only raw data results returned, you can get a better visual output of what you have by adding the following at the end of each search as you run it

| table _time *

which will give you rows of data being returned. So run

source=WinEventLog:Security EventCode=4624 OR (EventCode=4776 Keywords="Audit Success")
| table _time *

and you will see a ton of columns. Then run

source=WinEventLog:Security EventCode=4624 OR (EventCode=4776 Keywords="Audit Success")
| eval Account = mvindex(Account_Name, 1)
| table _time *

and you will see a new column called Account, which was derived from the second value of Account_Name. Keep going and figure out what each line does to your data. Use the Search Reference to work out each command: https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Eval
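If it helps to see mvindex in isolation, here is a tiny self-contained example (the values are made up, nothing from your data) that you can paste straight into the search bar; mvindex is zero-based, so mvindex(field, 1) returns the second value:

| makeresults
| eval Account_Name=split("SYSTEM;jsmith", ";")
| eval Account=mvindex(Account_Name, 1)
| table Account_Name Account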
And for clarity, is that limit 10k rows and 60s per this doc link? https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchTutorial/Useasubsearch (I think I remember chatting with you before and the actual limit is 50k?) I just tested a search and got back 20,878 rows, so I think it's more than 10k (Splunk v9.06).
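For what it's worth (hedged, since defaults can vary by version and your admins may have changed them): the 10,000-row/60-second figures come from the [subsearch] stanza in limits.conf, while the append/appendcols/join family apply their own, larger limits, which is likely why a subsearch fed to one of those commands can return more than 10k rows. You can check the effective values on your own instance with a REST search like

| rest /services/configs/conf-limits/subsearch splunk_server=local
| table maxout maxtime

and the underlying settings live in limits.conf, e.g.

[subsearch]
maxout = 10000
maxtime = 60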
You cannot pass things into the subsearch in Splunk. Subsearches run before the outer search, so the appendcols subsearch has no knowledge of Critical. Maybe you can share your saved search and more detail of the primary search, as there is probably a way to craft it that can work - it looks like you're using the saved search as some kind of lookup.
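To illustrate the kind of restructuring that usually works (purely a hypothetical sketch, since I don't know your saved search): instead of trying to push Critical into the appendcols subsearch, bring the saved search's results in first and combine the two result sets afterwards on a shared field, for example

<your primary search>
| eventstats count(eval(severity="Critical")) as Critical
| append [ | savedsearch my_saved_search ]
| stats values(*) as * by some_common_key

Here severity, my_saved_search and some_common_key are placeholders; the point is that anything the two searches need to share has to be joined on a common field after both have run, rather than passed into the subsearch.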
Hello, I have a question about how to pull custom method data collector values and add them to custom metrics which can be used in dashboard widgets in AppDynamics. I have configured the data collectors to pull the values from a given endpoint and have validated that the values are being pulled from snapshots; however, when I navigate to the Analytics tab and search for the custom method data, it is not present. I have double-checked that transaction analytics is enabled for the business transaction in question, and the data collector is shown in the Transaction Analytics - Manual Data Collectors section of Analytics. The only issue is getting these custom method data collectors to populate in the Custom Method Data section of the Analytics search tab so that I can create custom metrics on this data. Any help is much appreciated!
Been receiving this error from my UF. Extremely frustrating, since Splunk doesn't offer any support unless you're paying them.
- Did the systemd daemon reload
- Enabled/disabled boot-start
- Reviewed splunkd.log
- Sometimes it would say splunk.pid doesn't exist
- What the hell is going on here? Failures for both Ubuntu and AWS Splunk FW

Receiving the following error: "failed to start splunk.service: unit splunk.service not found"

SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
Loaded: error (Reason: Unit SplunkForwarder.service failed to load properly, please adjust/correct and reload service manager: Device or resource busy)
Active: failed (Result: signal) since Wed 2024-01-17 20:04:18 UTC; 13s ago
Duration: 1min 48.199s
Main PID: 14888 (code=killed, signal=KILL)
CPU: 2.337s
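Not a definitive fix, but one thing worth noting from the output above: the generated unit is SplunkForwarder.service, not splunk.service, which matches the "unit splunk.service not found" error. A commonly suggested approach is to regenerate the unit file and then manage the forwarder under its real unit name. Assuming a default /opt/splunkforwarder install running as user splunkfwd (adjust both to your environment), the sequence looks roughly like:

/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk disable boot-start
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1 -user splunkfwd
systemctl daemon-reload
systemctl start SplunkForwarder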
Is there a way to export the Content Management list to Excel? I want to go over it with my team, and it would be faster to have the full list of objects to determine what we want to enable.
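One workaround that may help (hedged, since Content Management itself only offers the UI): most of what it lists are saved searches, so you can pull the inventory with a REST search and then use Export Results > CSV, which opens fine in Excel. For example, for correlation searches:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table title eai:acl.app disabled description

The action.correlationsearch.enabled filter assumes ES-style correlation searches; drop it to list all saved searches.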
I know this is a couple of months old at this point, but there's no built-in support within SOAR to run your PowerShell script. However, you can use the WinRM app to connect to a proxy server of sorts, and then run your PowerShell script on that machine.
@gcusello Is there any other technique that works with this condition?
@PickleRick So what is your suggestion for this?
This post is a few years old, but to aid those who have spent hours trying to find the answer and end up here for help... In my case I saw the "killed by signal 9" because of the proxy configuration. I had 2 different cases of 'signal 9'. In one case the solution was that the application itself allowed the proxy to be set in its app settings; in the other case the solution was this post: Can you configure the Duo Splunk Connector to use ... - Splunk Community.
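For anyone who lands here with the same pattern, the splunkd-level version of that proxy setting lives in the [proxyConfig] stanza of server.conf (this is a general illustration, not specific to the Duo connector, and the values are placeholders):

[proxyConfig]
http_proxy = http://proxy.example.com:8080
https_proxy = http://proxy.example.com:8080
no_proxy = localhost, 127.0.0.1

Apps that make their own outbound calls (like the one in my first case) typically ignore this and need the proxy set in their own configuration.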