Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.
I have a query that works, but the output calculates a percentage column in a chart. I need to show the total of TAM and the correct percentage value across all the returned rows. I'm using this:

| inputlookup Patch-Status_Summary_AllBU_v3.csv
| stats count(ip_address) as total, sum(comptag) as compliant_count by BU
| eval patchcompliance=round((compliant_count/total)*100,1)
| fields BU total compliant_count patchcompliance
| rename BU as Domain, total as TAM, patchcompliance as "% Compliance"
| appendpipe [stats sum(TAM) as TAM sum(compliant_count) as compliant_count | eval totpercent=round((comp/TAM)*100,1)]
| eval TAM = tostring(TAM, "commas")

The output is:

Domain  TAM      compliant_count  % Compliance
BU1     1,180    1146             97.1
BU2     2,489    2420             97.2
BU3     409,881  96653            23.6
BU4     3        3                100.0
BU5     1,404    1375             97.9
BU6     119,003  90100            75.7
BU7     33,506   30669            91.5
BU8     2,862    1997             69.8
BU9     239,897  216401           90.2
BU10    3,945    3832             97.1
BU11    569      482              84.7
        814,739  445078

If I add avg("% Compliance") as "% Compliance" to the appendpipe stats command, it does not add up to the correct percentage, which in this case is 54.6; the average would display 87.1 instead. How do I calculate the correct total percentage using the totals of the TAM and compliant_count columns?
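One possible reading of the search above: the eval inside the appendpipe references a field called comp, which does not exist after the rename (the summed field is compliant_count), and its result is written to totpercent rather than to the displayed "% Compliance" column. A sketch of a corrected appendpipe, keeping the field names from the question:

```
| appendpipe
    [ stats sum(TAM) as TAM, sum(compliant_count) as compliant_count
    | eval "% Compliance" = round((compliant_count/TAM)*100,1)
    | eval Domain = "Total" ]
```

With the totals shown (445078 / 814739), this would yield the expected 54.6 in the "% Compliance" column of the totals row.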
Right now I have a cron expression like this - 0 * * * * - so the report is sent out every hour. How can I generate the report only once, when the condition is triggered? Thanks!
New to the community. I searched for this message, "Unable to fetch defaults: Unable to fetch authorize defaults.", but couldn't find anything relevant. Has anyone seen this message before? Any idea how to resolve it?
Hi Team,

[host::1.(xx|xx).xx.xx(x|y)]
TRANSFORMS-change_index_abc_secure = change_index_abc_secure

[change_index_abc_secure]
SOURCE_KEY = MetaData:Index
REGEX = os, os_secure
DEST_KEY = MetaData:Index
FORMAT = index::abc_secure

I need to route the logs from a certain host to index=abc_secure (not all the logs, only the os and os_secure logs).
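A couple of things stand out in the transform above: "os, os_secure" is not a regex alternation, and for index rewriting the destination key is normally _MetaData:Index with FORMAT set to the bare index name. A sketch, assuming os and os_secure are the sourcetypes you want to match (if they are actually source paths or current index names, SOURCE_KEY would change accordingly):

```
[change_index_abc_secure]
SOURCE_KEY = MetaData:Sourcetype
REGEX = ^sourcetype::(?:os|os_secure)$
DEST_KEY = _MetaData:Index
FORMAT = abc_secure
```

Note this is an index-time transform, so it must live on the first heavy forwarder or indexer that parses the data, and events already indexed are unaffected.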
Is it possible to build an app that contains a pre-configured `inputs.conf` in order to have (administrator defined) modular inputs emit events to a 'static' HEC input that is created (via the `inputs.conf` file) when an administrator installs the splunk app?
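In principle an app can ship a static HEC input in its default/inputs.conf. A sketch, where the stanza name and token value are placeholders you would choose yourself (HEC also has to be enabled globally on the instance):

```
[http]
disabled = 0

[http://my_static_hec]
token = 11111111-2222-3333-4444-555555555555
index = main
disabled = 0
```

Modular inputs configured by the administrator could then emit events to this token. One caveat to verify for your deployment: shipping a fixed token in an app means every installation shares the same token, which may or may not be acceptable from a security standpoint.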
Is there an option to update the value of a specific field within a specific artifact? I was able to update it using the phantom update_artifact action or with a REST call, but when the field is updated it also deletes the other existing fields in that artifact.
Hello, In the events, the severity is captured as a value between 1 and 10. I want to represent it as High, Low, Medium, etc. For example: if the severity is between 1 and 3, Low; if the severity is between 4 and 5, Medium; and so on. Please advise on how to achieve this. Thanks in advance.
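This kind of banding is typically done with eval and case. A sketch, assuming the numeric field is called severity; the boundaries above 5 are made up here, since the question only defines the Low and Medium bands:

```
| eval severity_label = case(
    severity>=1 AND severity<=3, "Low",
    severity>=4 AND severity<=5, "Medium",
    severity>=6 AND severity<=8, "High",
    severity>=9, "Critical",
    true(), "Unknown")
```

The final true() branch acts as a catch-all so events with a missing or out-of-range severity still get a label.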
I have the following scenario. An object transitions through multiple queues, and I want to query the time spent in Queue 1, grouped by object type. Each object has a unique id, and it generates an event every time it transitions between queues:

Event 1:
id: 123
type: type1
status: IN_QUEUE_1
duration: 100

Event 2:
id: 123
type: type1
status: IN_QUEUE_2
duration: 150

Desired output:

Type     average_time_in_queue1
type1    50
type2    ...
type3    ...
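One way to read the example, assuming duration is cumulative, is that time in Queue 1 = duration when entering Queue 2 minus duration when entering Queue 1 (150 - 100 = 50). Under that assumption, a sketch:

```
index=... status=IN_QUEUE_1 OR status=IN_QUEUE_2
| stats range(duration) as time_in_queue1 by id, type
| stats avg(time_in_queue1) as average_time_in_queue1 by type
```

range(duration) gives max minus min per id, i.e. 50 for id 123, and the second stats averages that per type. If objects can re-enter Queue 1, you would need something more stateful, such as streamstats over events sorted by id and _time.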
Activity Result: {"IsProductValidated":"false","ErrorCodes":[{"errorCode":"PRD-202","errorMessage":"Product Validation Service Returned Error :: Reason: Options you have selected are not available at this time. Please change your selections."}]}
I'm trying to run a test with data that I onboarded with the collect command. What I see is that when I insert an event that is exactly the same as existing events, the inserted data is not searchable on the extracted fields, while it is with the data that was onboarded normally. My guess is that the transforms.conf configuration does not run on collected events, but I cannot figure out how to make sure it does. How can I force transforms.conf to also run on data onboarded with collect?
I created a workflow action off of some netflow logs. I want to take the source IP from the netflow and pass it to another search that looks at authentication logs from another log source, to see the user that most recently authenticated PRIOR to the event that I am triggering the workflow from. I can pass _time to the new search as latest=$_time$, but I cannot seem to set earliest to what I want (in this case, 4 hours before the passed $_time$ variable). How can I properly set earliest to 4 hours before $_time$ so the workflow search looks back 4 hours from the event I am pivoting off of?
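As far as I know, workflow action tokens are substituted literally, so you cannot do arithmetic inside a time modifier like earliest=$_time$-4h. One workaround is to search a generously wide window and filter inside the search, where $_time$ expands to a plain epoch number that eval/where arithmetic can use. A sketch, where the index, sourcetype, and field names (src_ip, user) are placeholders for your environment:

```
index=auth_index sourcetype=auth_logs src_ip=$src_ip$ earliest=-30d latest=$_time$
| where _time >= $_time$ - 14400
| sort - _time
| head 1
| table _time user src_ip
```

14400 is 4 hours in seconds; head 1 after the descending sort returns the most recent authentication before the pivot event.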
While setting the alert action to webhook and entering the URL details, I am getting error logs like these. URL format: http://<IP>:<PORT>/alert

ERROR sendmodalert [46453 AlertNotifierWorker-0] - action=webhook STDERR - Error sending webhook request: HTTP Error 401:
action=webhook - Alert action script completed in duration=84 ms with exit code=2
sendmodalert [19216 AlertNotifierWorker-0] - action=webhook - Alert action script returned error code=2

Has anyone else faced a similar issue while setting up webhooks? A response would be appreciated. Thanks!
Is it possible to use different index names for each server? I would like to send the same logs from a Heavy Forwarder to two servers (Splunk Enterprise, Splunk Cloud). The logs to Splunk Cloud will be sent using the Credentials App. Will the configuration below perform this? Are there any corrections / other ways to perform this?
Hi,

I have a search, index=main sourcetype=data2 type=policy, that gives me the following in JSON:

customerId: man0000
dns: false
ioc: true
type: policy

I have a CSV (its purpose is to show what the default settings should be across all customers):

Config Item, Config setting
DNS, Enabled
IOC, Disabled

We also have a list of customers in a database with the customerIds.

So my search logic was as follows:

Search the index to bring back all the different search results as a table
Rename the search results so that instead of dns I have DNS, instead of ioc I have IOC, etc.
| join customerId [| dbxquery query=.....] - to get customer ids
| inputlookup the CSV file

Here is where I get stuck: I don't know how to link them together, so that for every customerId from the DB that matches the customerId in the search, I compare the results from the search against the CSV (i.e. where ioc is true and the CSV says Disabled) and output those results.

Any help would be appreciated. Thanks in advance.
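One way to do the comparison without a join: reshape each event so every setting becomes its own row, then look up the expected value per setting and keep the mismatches. A sketch, where the lookup file name customer_defaults.csv is a placeholder for your CSV:

```
index=main sourcetype=data2 type=policy
| eval DNS=if(dns="true","Enabled","Disabled"), IOC=if(ioc="true","Enabled","Disabled")
| table customerId DNS IOC
| untable customerId "Config Item" actual_setting
| lookup customer_defaults.csv "Config Item" OUTPUT "Config setting" as expected_setting
| where actual_setting != expected_setting
```

untable turns the DNS/IOC columns into rows keyed by "Config Item", which matches the CSV's key column, so lookup can attach the expected setting for each row. The dbxquery customer list could then be joined in only if you also need customers with no events at all.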
Can someone tell me what I am doing wrong in this XML?

<dashboard>
  <label>test Veracode</label>
  <row>
    <panel>
      <title>Severity by flaw</title>
      <chart>
        <search>
          <query>index="veracode_test" sourcetype="Veracode_scan" | lookup Veracode.csv findings{}.severity | stats count by Severity | append [| inputlookup Veracode.csv | fields Severity | stats count by Severity | eval count = 0] | stats max(count) as Total by Severity | eval sorter = case(Severity="Very High", 5, Severity="High", 4, Severity="medium", 3, Severity="Low",2, Severity="Very Low",1,1==1,99) | sort + sorter | fields - sorter</query>
          <earliest>0</earliest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">none</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.fieldColors">{"Very High":#e60000,"High":ff0000,"meidum":#ff8000, "Low":#ffbf00,"Very Low":#ffff00 }</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">right</option>
        <option name="charting.lineWidth">2</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
I need to add multiple values from a CSV to a main search I have. I used the lookup command, but I think that will just compare one field between the main search and the CSV, and I need to add more fields from the CSV to do some evals. Please help!
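The lookup command can return several CSV columns at once via its OUTPUT clause, even though it matches on a single key. A sketch, where the file name, the key field, and the column names are all hypothetical stand-ins for your CSV:

```
... | lookup my_file.csv csv_key AS search_field OUTPUT col_a col_b col_c
| eval ratio = col_a / col_b
```

Lookups can also match on several fields at once (| lookup my_file.csv key1 key2 OUTPUT ...), if the CSV has a compound key.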
Hi, I have been tasked to design an alert that triggers whenever there is a modification of the "search query" of an alert. To achieve this, I have decided to use the following approach:

1. compute the hash value of the search
2. create a lookup table (say, search_hash.csv)
3. then compute the hash of the search again (say, every 24hr)
4. compare the computed hash against the already existing hash in the lookup table
5. if there is a difference, REPLACE the value in the original lookup file search_hash.csv with the dynamically computed value

I have been able to reach step 4, but am stuck at step 5. Please, can someone help me with how to achieve the last step of dynamically replacing values of a lookup with search results?
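Step 5 is usually done with outputlookup, which overwrites (or, with append=true, appends to) the lookup file with the current search results. A sketch, assuming the lookup columns are named search_name and hash; the rest comes from your existing steps 1-4:

```
... compute the current hash per saved search ...
| table search_name hash
| outputlookup search_hash.csv
```

Since this rewrites the whole file, make sure the search emits one row per saved search you track, not just the rows whose hash changed, or the untouched entries will be lost.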
I am looking to convert this regular search:

index=foo action=blocked `macro` src_zone=foo | timechart count span=1d

over to a search that leverages tstats and the Network Traffic data model, showing the count of blocked traffic per day for the past 7 days, because of the large volume of network events:

| tstats count AS "Count of Blocked Traffic" from datamodel=Network_Traffic where (nodename = All_Traffic.Traffic_By_Action.Blocked_Traffic) All_Traffic.src_zone=foo groupby _time, All_Traffic.src_zone prestats=true

How can I get this search to use timechart? Thx
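Since the tstats search above already sets prestats=true, its output can feed timechart directly, provided _time is bucketed in the by clause with a span matching the timechart span. A sketch keeping the constraints from the question:

```
| tstats prestats=t count from datamodel=Network_Traffic
    where nodename=All_Traffic.Traffic_By_Action.Blocked_Traffic All_Traffic.src_zone=foo
    earliest=-7d@d
    by _time span=1d
| timechart span=1d count as "Count of Blocked Traffic"
```

Dropping All_Traffic.src_zone from the by clause avoids a split-by series in the timechart; keep it (and use timechart ... by) if you want one line per zone.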
Hi All, I am not able to see the logs in Splunk from one source and one host.

Use case: I have 2 hosts, host a and host b, both with source=/app/opt/source/logs/sample.log. I can see the data from host a, but I cannot see the data from host b.

Below is the inputs stanza used:

[monitor:///app/opt/source/logs/sample.log]
sourcetype=app:sample:log
disabled=0
index=xxxx
blacklist= \.(.?:tar |gz)$
Hello @All, I am using the Splunk Add-on for Microsoft Cloud Services to create a new Event Hub input. I would like to ask how to make sure I have set it up properly. We have an old one which is up and running. The idea is to create a new namespace and run the two in parallel, to make sure everything works before we shut down the old one. The new input was created successfully; however, I still cannot see any new source. Where am I going wrong? Do I need to manually create an entry within a config file? Thank you all!