All Topics



Apologies, as this is a bit lengthy, but I'm completely stuck. I need to show a compliance percentage, computed from weighted values for each result, over the past 4 months.

Basically, an item can have two states (pass or fail), with a score attributed based on severity: a high-severity item is worth 72 points, a medium 36, and a low 12. I calculate the weighted percentage of each machine using those numbers (72/36/12). From there I have to find one more value, called Actual Percentage: a machine must have a weighted percentage over 90% AND 0 high-severity fails. This gives a binary 1/0 result: 1 if you meet both requirements, 0 if you do not.

I was able to get this working for a single time range (such as 30 days), as shown below, but I'm unable to make it work with a timechart if I want to see the Actual Percentage of all the machines over the past 4 months broken down by month (1mon). Basically it's: determine values for each compliance item -> calculate the weighted percentage -> calculate the actual percentage of the machine (weighted percentage above 90% and no high failed checks) -> produce output for the environment over the past 4 months, one data point per month. Thank you for any ideas to get me past this syntax block.
search query
| dedup comp_id check_id
| stats count(eval(compliance_result="passed" OR compliance_result="excepted_passed" OR compliance_result="excepted_failed")) AS Passed,
    count(eval(compliance_result="failed")) AS Failed,
    count(eval(source_severity="high" AND (compliance_result="passed" OR compliance_result="excepted_passed" OR compliance_result="excepted_failed"))) AS HPassed,
    count(eval(source_severity="high" AND compliance_result="failed")) AS HFailed,
    count(eval(source_severity="medium" AND (compliance_result="passed" OR compliance_result="excepted_passed" OR compliance_result="excepted_failed"))) AS MPassed,
    count(eval(source_severity="medium" AND compliance_result="failed")) AS MFailed,
    count(eval(source_severity="low" AND (compliance_result="passed" OR compliance_result="excepted_passed" OR compliance_result="excepted_failed"))) AS LPassed,
    count(eval(source_severity="low" AND compliance_result="failed")) AS LFailed
    by comp_id
| eval High_Failed=HFailed
| eval WP_High=HPassed*72, WP_Med=MPassed*36, WP_Low=LPassed*12, WF_High=HFailed*72, WF_Med=MFailed*36, WF_Low=LFailed*12
| eval Weighted_Passed=WP_High+WP_Med+WP_Low, Weighted_Failed=WF_High+WF_Med+WF_Low
| eval WC_Perc=(100-((Weighted_Failed/(Weighted_Passed+Weighted_Failed))*100))
| eval WC_Perc=round(WC_Perc,1)
| eval Weighted_Comp_Passed=if((HFailed="0" AND WC_Perc>90), "1", "0")
| eval Weighted_Comp_Failed=if((HFailed!="0" OR WC_Perc<90), "1", "0")
| eval goodMachines=if(WC_Perc>90 AND HFailed=0, 1, 0)
| stats sum(goodMachines) as sumOfGoodMachines, count(goodMachines) as countOfMachines
| eval percentGoodMachines=((sumOfGoodMachines / countOfMachines)*100)
| eval percentGoodMachines=round(percentGoodMachines,2)
| eval percentGoodMachines=percentGoodMachines."%"
| rename percentGoodMachines AS "Actual Compliance"
| table checklist "Actual Compliance"
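One possible way to get the monthly breakdown (a sketch, reusing the field names from the query above; the simplification that anything other than "failed" counts as passed follows the pass list in the original, but is an assumption about the data) is to bin _time into months before the per-machine stats and carry _time through both stats passes:

```
search query earliest=-4mon@mon
| dedup comp_id check_id
| bin _time span=1mon
| stats count(eval(source_severity="high" AND compliance_result="failed")) AS HFailed,
    count(eval(source_severity="medium" AND compliance_result="failed")) AS MFailed,
    count(eval(source_severity="low" AND compliance_result="failed")) AS LFailed,
    count(eval(source_severity="high" AND compliance_result!="failed")) AS HPassed,
    count(eval(source_severity="medium" AND compliance_result!="failed")) AS MPassed,
    count(eval(source_severity="low" AND compliance_result!="failed")) AS LPassed
    by _time comp_id
| eval Weighted_Passed=HPassed*72+MPassed*36+LPassed*12
| eval Weighted_Failed=HFailed*72+MFailed*36+LFailed*12
| eval WC_Perc=round(100-((Weighted_Failed/(Weighted_Passed+Weighted_Failed))*100), 1)
| eval goodMachines=if(WC_Perc>90 AND HFailed=0, 1, 0)
| stats sum(goodMachines) AS sumOfGoodMachines, count AS countOfMachines by _time
| eval "Actual Compliance"=round((sumOfGoodMachines/countOfMachines)*100, 2)
| table _time "Actual Compliance"
```

The key point is that timechart is not required: bin plus a by _time clause on both stats calls produces one "Actual Compliance" row per month. Note that dedup across the whole 4 months may need rethinking (e.g. dedup comp_id check_id per month) depending on how often each check repeats.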
Hello, I have some log events that are structured like this:

<timestamp> - [INFO] <serialnumber 1>
<timestamp> - [INFO] <errorcode>
<timestamp> - [INFO] <errorcode>
<timestamp> - [INFO] <serialnumber 2>
<timestamp> - [INFO] <errorcode>

How can I group the error_code events with the corresponding serialnumber event, meaning the previous serialnumber event? Thank you
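One common pattern for this kind of "inherit a value from the previous event" problem (a sketch; the extraction regexes and the field names serialnumber and error_code are assumptions about the actual data) is to extract the serial number where it appears and carry it forward onto the following events with filldown:

```
index=mylogs sourcetype=mysourcetype
| rex "\[INFO\]\s+(?<serialnumber>SN-\S+)"
| rex "\[INFO\]\s+(?<error_code>ERR-\S+)"
| sort 0 _time
| filldown serialnumber
| where isnotnull(error_code)
| stats values(error_code) AS error_codes by serialnumber
```

The sort 0 _time matters: search results come back newest first by default, and sorting ascending makes filldown copy the chronologically previous serial number down onto the error events.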
I'm trying to figure out where app content is served from in a distributed deployment. For example, is the entirety of each app stored on every server in the deployment, or is it served from a single location? Specifically, I have two questions: if you want to edit app dashboards or JavaScript files using a text editor rather than the GUI, can you do that in a distributed environment? If so, where would you do that?
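For reference, dashboards and app JavaScript are rendered by the search head, so that is where you would edit them on disk (a sketch; the app and file names are placeholders). Indexers receive copies of search-time knowledge via the knowledge bundle, but the UI content is served from the search head:

```
# On each search head (in a search head cluster, push changes via the deployer
# rather than editing members directly):
$SPLUNK_HOME/etc/apps/<app_name>/local/data/ui/views/my_dashboard.xml   # dashboard XML
$SPLUNK_HOME/etc/apps/<app_name>/appserver/static/my_script.js          # custom JavaScript
```

Edits to view XML are usually picked up on reload; changes to static assets may need a restart or a bump of the asset cache to appear.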
Hi All, I am using the Splunk trial version. After I installed the Splunk Add-on for Microsoft Cloud Services, I got the message below:

Unable to initialize modular input "mscs_storage_table" defined in the app "Splunk_TA_microsoft-cloudservices": Introspecting scheme=mscs_storage_table: script running failed (exited with code 1).

Can anyone help me identify the issue and how to fix it? Thanks, Ramesh J
BLUF: (on Windows Server 2019) install mscs version 4.0.0. You can add/edit the Configuration and Inputs; v4.0.1 won't let you. If you install mscs 4.0.1, it hangs at "loading" and eventually crashes Splunk. Not sure what happens if you update v4.0.0 to v4.0.1 after configuring it; not willing to find out at this point. It does not matter whether it is installed before or after the MS Azure Add-on/App for Splunk: it still hangs at "loading".
What permissions are required to be able to save a Slideshow? I am able to save a new slideshow as an Admin in Splunk, but when a user goes through the same process it does not save: they get the options to create the slideshow and use the Save function, but nothing happens and no error is shown. Thanks.
I have the result below in a table, but for technical reasons I need to view these values in a bar chart. However, since the field is a string containing a time value, I am not able to perform the conversion.

index=teste "EnviaSMS"
| table _time, enviaSMS, GravaDB, VerificaTotal
| rename VerificaTotalSec As "Verifica - TotalEmSegundos"
| sort -_time

Application log:

2020-07-21T12:49:40.168 Dbg 09900 [000c02f527958102] [Tempos] - EnviaSMS:0h0m0s0ms - GravaDB:0h0m0s16ms - VerificaTotal:0h0m0s172ms
2020-07-21T12:49:40.136 Dbg 09900 [000c02f527957636] [Tempos] - EnviaSMS:0h0m0s0ms - GravaDB:0h0m0s0ms - VerificaTotal:0h0m0s155ms
2020-07-21T12:49:40.136 Dbg 09900 [000c02f527957636] [Tempos] - EnviaSMS:0h0m0s0ms - GravaDB:0h0m0s0ms - VerificaTotal:0h0m0s160ms

How could I chart the last 60 minutes on a line graph displaying the information in the 3 columns?
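A sketch of one approach, based on the sample log lines above: extract the hour/minute/second/millisecond components of each duration with rex, convert them to numeric seconds with eval, then timechart the averages. The regexes assume the "0h0m0s172ms" format shown is consistent:

```
index=teste "EnviaSMS" earliest=-60m
| rex "EnviaSMS:(?<sms_h>\d+)h(?<sms_m>\d+)m(?<sms_s>\d+)s(?<sms_ms>\d+)ms"
| rex "GravaDB:(?<db_h>\d+)h(?<db_m>\d+)m(?<db_s>\d+)s(?<db_ms>\d+)ms"
| rex "VerificaTotal:(?<vt_h>\d+)h(?<vt_m>\d+)m(?<vt_s>\d+)s(?<vt_ms>\d+)ms"
| eval EnviaSMS_sec=sms_h*3600+sms_m*60+sms_s+sms_ms/1000
| eval GravaDB_sec=db_h*3600+db_m*60+db_s+db_ms/1000
| eval VerificaTotal_sec=vt_h*3600+vt_m*60+vt_s+vt_ms/1000
| timechart span=1m avg(EnviaSMS_sec) AS EnviaSMS, avg(GravaDB_sec) AS GravaDB, avg(VerificaTotal_sec) AS VerificaTotal
```

Once the fields are numeric, the same search works with any line or bar chart visualization.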
I have a query where I'm trying to get the amount of time a transaction takes to execute. I was selecting only a piece of the transaction, but the data does include a transaction id. This query returns results: "process start" OR "process end". Then I use rex to grab the transaction id, which also returns results:

| rex field=_raw "Correlation Id: \[(?<correlation_id>.*)\] "

The next step is to add transaction so it correlates the whole transaction; this is where it fails, because the transaction addition doesn't return any results. I'm sure something is wrong with my query:

| transaction correlation_id _raw startswith="process start" endswith="process end"

Can you use transaction on a rex-generated field? Thanks in advance.
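Yes, transaction works fine on rex-extracted fields, as long as the rex runs earlier in the pipeline. One thing that stands out in the query above is the extra _raw after correlation_id: transaction treats it as a second grouping field, so events are only grouped when their entire raw text matches, which never happens here. A sketch without it (same field names as above):

```
("process start" OR "process end")
| rex field=_raw "Correlation Id: \[(?<correlation_id>[^\]]+)\]"
| transaction correlation_id startswith="process start" endswith="process end"
| table correlation_id duration
```

A stats-based alternative, often cheaper than transaction, would be | stats range(_time) AS duration by correlation_id, which gives the elapsed seconds between the first and last event per id.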
I am using the query below:

index=aws earliest=-12h eventName=AuthorizeSecurityGroupIngress "items{}.cidrIp"="0.0.0.0/0" AND "items{}.port"!=8080 AND "items{}.port"!=22 AND "items{}.port"!=80 AND "items{}.port"!=443

The idea is that if a user makes a change using only the ports mentioned above, the event is not shown, and this gives me the exact results for simple cases. But when a log has nested ports, it does not give the actual result, e.g.:

items: []
    {port: 12345}
    {port: 22}

If a nested log contains any port outside the allowed list, it should still show in the result. Any thoughts on this?
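The underlying issue is that "items{}.port" is a multivalue field, and a != comparison against a multivalue field behaves differently from comparing each value. A sketch (assuming the field names above): expand the ports into one row per port, then filter, so an event with ports 12345 and 22 still surfaces because of the 12345 row:

```
index=aws earliest=-12h eventName=AuthorizeSecurityGroupIngress "items{}.cidrIp"="0.0.0.0/0"
| spath path="items{}.port" output=port
| mvexpand port
| where port!=8080 AND port!=22 AND port!=80 AND port!=443
```

If you need one row back per original event afterwards, a stats values(port) by some per-event id (e.g. eventID, if present in your CloudTrail data) can regroup the results.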
I want to create a geo map using the 'geostats' command, and I am extracting and assigning country, latitude, and longitude using the eval command. Source log file:

Application_Name  Transaction_Name  Status
NP_DEV_AU  Homepage  0
NP_DEV_VN  Homepage  0
NP_DEV_AU  Login  1
NP_DEV_HK  Homepage  1
NP_DEV_ID  Login  0
NP_DEV_ID  Homepage  0
NP_DEV_VN  Login  0
NP_DEV_HK  Login  1

Below is my query:

index=foo host=ab03u source=*sm.log sourcetype=bol_logs
| rex field=Application_Name "NP_DEV_(?<Market>\w+.*)"
| eval Country=case(Market=="HK","Hong Kong", Market=="TH","Thailand", Market=="VN","Vietnam", Market=="AU","Australia", Market=="ID","Indonesia", Market=="PH","Philippines", Market=="MY","Malaysia", Market=="SG","Singapore", Market=="CN","China")
| chart values(Status) by Country,Transaction_Name
| eval Result=case((Homepage!=Login), "Failure", (Homepage==0 AND Login==0), "Success", (Homepage==1 AND Login==1), "Failure")
| eval latitude=case(Country=="Hong Kong","22.396428", Country=="Thailand","15.870032", Country=="Vietnam","14.058324", Country=="Australia","-25.274398", Country=="Indonesia","-0.789275", Country=="Philippines","12.879721", Country=="Malaysia","4.210484", Country=="Singapore","1.352083", Country=="China","35.86166")
| eval longitude=case(Country=="Hong Kong","114.109497", Country=="Thailand","100.992541", Country=="Vietnam","108.277199", Country=="Australia","133.775136", Country=="Indonesia","113.921327", Country=="Philippines","121.774017", Country=="Malaysia","101.975766", Country=="Singapore","103.819836", Country=="China","104.195397")
| geostats latfield=latitude longfield=longitude values(Result) by Country

So my final output will be:

Country  Result
Australia  Failure
Vietnam  Success
Hong Kong  Failure
Indonesia  Success

From this table I want to create a geo map using the 'geostats' command; the output should be shown as bubbles.
Whichever country has 'Failure' as the Result should show as a red bubble, and 'Success' as a green bubble. When I use the above query, it does not show any result for the cluster map. @skakehi_splunk @rnowitzki @richgalloway @woodcock @somesoni2 @niketn ... Need all your expertise on this.
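Two things worth trying, as a sketch (this assumes the query above already produces one row per country with Result, latitude, and longitude): the case() calls assign latitude/longitude as quoted strings, so make them numeric, and split geostats by Result rather than by Country so the map can slice (and color) the bubbles by pass/fail. Replacing the final geostats line:

```
| eval latitude=tonumber(latitude), longitude=tonumber(longitude)
| geostats latfield=latitude longfield=longitude count by Result
```

With Result as the split field, each bubble is sliced into Success/Failure counts, and red/green can then be assigned to those series in the visualization's color settings.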
Hi, I have this input set up in Splunk_TA_snow in the local folder. When I first configured this input, data went successfully into the test index below; I got the records from the associated ServiceNow table. Now, when I change to the prod index and restart Splunk, the TA writes this to the log for sys_user_group:

2020-07-21 14:00:48,988 INFO pid=14877 tid=Thread-1 file=snow_data_loader.py:_do_collect:151 | start https://serviceflo.servicenowservices.com/api/now/table/sys_user_group?sysparm_display_value=all&sysparm_limit=4000&sysparm_exclude_reference_link=true&sysparm_query=sys_updated_on>=2020-07-20+15:13:56^ORDERBYsys_updated_on

I'm not getting any records, which is OK in itself, but it is only looking for records in ServiceNow updated after 2020-07-20. I need to back-populate this table into the prod index, but the TA does NOT go back to the since_when time below. Any ideas to get this data?

inputs.conf:

[snow://sys_user_group]
since_when = 2000-01-01 00:00:00
disabled = 0
duration = 300
id_field = sys_id
index = servicenow_test
timefield = sys_updated_on

Thx, brdr
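One thing worth checking, as an assumption about the add-on's behavior based on how modular inputs typically work: since_when is usually honored only on the input's first run, after which a saved checkpoint (keyed by the stanza name) takes over, and changing the index does not reset that checkpoint. A common workaround is to clone the stanza under a new name so the collection starts fresh from since_when; the stanza name and the prod index name here are placeholders:

```
[snow://sys_user_group_prod]
since_when = 2000-01-01 00:00:00
disabled = 0
duration = 300
id_field = sys_id
index = servicenow_prod
timefield = sys_updated_on
```

The old stanza can be disabled once the new one has back-filled.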
We are planning to ingest Oracle standard auditing and FGA logs (both stored in Oracle DB tables) via DB Connect into Splunk. Does anyone here know if Splunk updates the DBA_AUDIT_MGMT_LAST_ARCH_TS value for the audit trails after it collects the data? This value is a timestamp that tells the source Oracle database that the audit data has been collected by the external tool, and lets the DBAs know that the audit logs have been collected from the database. In turn, this lets the database purge jobs execute and delete the audit data from the database, since it has already been collected by Splunk.
Dear All, I'm testing Splunk Phantom using the Community Edition to evaluate this product, which seems great. Configuring the Talos Intelligence version 1.0.1 application for Splunk Phantom (published by Splunk), it seems that the app is no longer working since Talos moved behind Cloudflare protection. Testing connectivity returns an error about enabling cookie support. Please find the connectivity test result below:

App 'Talos Intelligence' started successfully (id: 1595335404831) on asset: 'talos intelligence' (id: 14)
Loaded action execution configuration
Doing test IP lookup...
Test Connectivity Failed. 1 action failed
Status Code: 403. Data from server: Attention Required! | Cloudflare
Please enable cookies. Sorry, you have been blocked. You are unable to access talosintelligence.com.
Why have I been blocked? This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.
What can I do to resolve this? You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the Cloudflare Ray ID found at the bottom of this page.
Cloudflare Ray ID: 5b6501eb1889b769
Your IP: X.X.X.X
Performance & security by Cloudflare

Anyone experiencing the same issue? Any application update planned to solve this?

Kind regards, Louis
We have filtered out a large amount of firewall logs on the heavy forwarder, due to which we are receiving the warning "WARN TailReader - Could not send data to output queue (parsingQueue), retrying...". Could anyone please help if we want to whitelist the data instead of blacklisting it, so that this issue could be resolved?
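A whitelist-style filter on a heavy forwarder is usually built as two transforms (a sketch; the sourcetype stanza and the keep regex are placeholders for your data): the first sends everything to nullQueue, and the second routes only the events you want back to indexQueue. Because the last matching transform wins, only matching events survive:

```
# props.conf
[your_firewall_sourcetype]
TRANSFORMS-filter = fw_drop_all, fw_keep_wanted

# transforms.conf
[fw_drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[fw_keep_wanted]
REGEX = (deny|drop|alert)
DEST_KEY = queue
FORMAT = indexQueue
```

The order in the TRANSFORMS- list matters: the drop-all rule must come first so the keep rule can override it.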
Hi, I have sample data as follows, with separate space stats for multiple dates:

{"object":"DOC1","date":"2020-06-13","write":7321.445,"delete":6717.463,"objCntW":5148955}

I am writing the following to find the sum of space per object, but I am getting incorrect values:

index=avs_os host=dc1prftseix01 sourcetype=stat
| spath output=archobj "{}.object"
| spath output=date "{}.date"
| spath output=space "{}.write"
| table archobj,date,space
| addtotals
| stats sum(space) as Space by archobj

If I try to find the total sum without the group by (| stats sum(space) as Space), I get correct values. Please help.
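The likely cause is that spath over a JSON array produces multivalue fields, so each event carries several objects and values at once, and grouping by a multivalue archobj misaligns them. A common fix (a sketch, reusing the field names above) is to zip the parallel multivalue fields together, expand to one row per object/date/space triple, then sum:

```
index=avs_os host=dc1prftseix01 sourcetype=stat
| spath output=archobj "{}.object"
| spath output=date "{}.date"
| spath output=space "{}.write"
| eval zipped=mvzip(mvzip(archobj, date, "|"), space, "|")
| mvexpand zipped
| eval archobj=mvindex(split(zipped, "|"), 0),
       date=mvindex(split(zipped, "|"), 1),
       space=tonumber(mvindex(split(zipped, "|"), 2))
| stats sum(space) AS Space by archobj
```

After mvexpand, each row holds exactly one object with its matching space value, so the by-clause sums correctly.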
Hi, what is the role of HEADER_MODE in props.conf? I am reading the documentation, but I don't understand it: https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Propsconf Could anyone please explain what it does, with a specific example? Thanks in advance.
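As a concrete illustration (the stanza name and sample data here are made up): HEADER_MODE controls whether a ***SPLUNK*** header line inside the incoming data stream is allowed to rewrite index-time fields such as host or sourcetype. With always, any such line is honored; with firstline, only the first line of the stream; with none, the ***SPLUNK*** string is treated as ordinary data. If the setting is empty, scripted inputs behave as always and file inputs as none.

```
# props.conf (hypothetical sourcetype for a scripted input)
[my_scripted_feed]
HEADER_MODE = firstline

# Sample output from the scripted input: the first line is consumed as a
# directive and rewrites index-time fields for the events that follow.
***SPLUNK*** host=appserver01 sourcetype=app_metrics
2020-07-21 12:00:00 cpu=42
2020-07-21 12:01:00 cpu=57
```

In other words, it mainly matters for scripted or streamed inputs that embed these directives; typical file monitoring inputs can usually leave it unset.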
While learning Splunk, I practiced basic commands like the copy and move commands. I created a folder called downloads, and in it I created three folders: splunk, provoke, and srikanth. I moved srikanth into provoke, so now the structure is downloads/provoke/srikanth. I need the commands for copying and moving srikanth back to downloads/.
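On a Linux/Unix shell (as a sketch; the paths come from the layout described above), cp -r copies a directory recursively and mv moves it:

```shell
# Recreate the practice layout described above
mkdir -p downloads/provoke/srikanth downloads/splunk

# Copy srikanth back up into downloads/ (provoke keeps its copy)
cp -r downloads/provoke/srikanth downloads/

# To move it instead of copying (removes it from provoke), use:
# mv downloads/provoke/srikanth downloads/
```

The -r flag is required for cp because plain cp refuses to copy directories; mv needs no extra flag.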
I created virtual machines in the AWS console and connected via Git Bash. I tried to add a search peer; for that, I looked for the distServerKeys on the search head, to copy them to the indexer, but I don't find a distServerKeys folder under the auth folder.
Hi Splunkers! This is just an example from Splunk, and it is similar to the visualization I want, except for one point. It already shows the time separately in the second "Time" column, so I want to exclude the time from the third "Event" column (the part marked with red pen). And, if possible, I want to show only the parts marked with blue pen in the Event column. Is that possible? (I don't want to use a statistics table, because I want to keep the first (i) column, which shows all the information when clicked.) Can anyone help me? Thanks in advance.
How do I extract a string of numbers using rex and convert it from a string to a number, so I can work out the AVG? At the moment it shows as blank. The number format is 00:00:00.0000000. Thanks, Joe
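A sketch of one approach (the field name elapsed, and the assumption that the 00:00:00.0000000 value appears somewhere in _raw, are mine): extract the duration with rex, convert it to seconds by splitting on the colons, then average the numeric result:

```
| rex field=_raw "(?<elapsed>\d{2}:\d{2}:\d{2}\.\d+)"
| eval elapsed_sec = tonumber(mvindex(split(elapsed, ":"), 0))*3600
                   + tonumber(mvindex(split(elapsed, ":"), 1))*60
                   + tonumber(mvindex(split(elapsed, ":"), 2))
| stats avg(elapsed_sec) AS avg_seconds
```

The avg was coming back blank because avg() ignores non-numeric values; tonumber() on the last segment (e.g. "00.0000000") also preserves the fractional seconds.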