All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a script for Linux that executes "sar -n DEV" and formats the output to look like: Linux <kernel version> (<hostname>) <date> <arch> (<#> CPU) Average: <interface> <field1> <field2> <field3> Average: <interface> <field1> <field2> <field3> Average: <interface> <field1> <field2> <field3> Using Splunk Web's field extractor, I have a regex that applies field extraction to the first "Average:" line. How do I make the extraction apply to however many "Average:" lines exist?
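A sketch of one possible approach: the `rex` command's `max_match` option extracts a value from every matching line rather than just the first. The field names below are placeholders standing in for the real sar columns:

```
... | rex field=_raw max_match=0 "(?m)^Average:\s+(?<interface>\S+)\s+(?<field1>\S+)\s+(?<field2>\S+)\s+(?<field3>\S+)"
```

With `max_match=0`, `interface`, `field1`, etc. become multivalued fields holding one value per "Average:" line; `mvexpand` can then split them into separate results if needed.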
We've added a self-signed cert to our haproxy server, which passes traffic on to our search head cluster. After doing so, I changed the following in web.conf: [settings] tools.proxy.on = true tools.proxy.base = https://internalhost.local Now, when trying to visit https://internalhost.local, it properly maintains https and redirects to https://internalhost.local/en-US, which then redirects to https://127.0.0.1:8000. I can't seem to find any configuration value in my web or server settings that calls out 127.0.0.1, which puts me at a loss for what to adjust.
I am trying to create a timechart for a query that returns a count for a set of products where the lifecycle status is either compliant or out of compliance. The count is then used to create a percentage. The query returns the two counts (which I don't care about) and the associated percentage for each (which is what I do want to get into a timechart for the past 90 days). I have the search working but have not been able to figure out how to get the percentages (two lines, one chart) into a timechart. Below is my search; any help would be appreciated. What I have so far does return a count of events, but nothing in a chart, and the search itself says no results found. index="index" sourcetype=productversion (( removablemedia_win OSType="Windows*" AND LifeCycleStatus!="NewVersion") OR NOT ProductName="") | stats count(LifeCycleStatus) AS lifecycletotal | join type=outer OSType [search index="index" sourcetype=productversion (NOT ProductName="" (OSType="Windows*")) OR ( removablemedia_win AND (OSType="Windows*") AND (LifeCycleStatus="Mainstream" OR LifeCycleStatus="Emerging")) | stats count(LifeCycleStatus) AS IsCompliant] | eval Compliant=(IsCompliant/lifecycletotal)*100 | eval Compliant=round(Compliant,2) | eval NonCompliant=(100-Compliant) | eval NonCompliant=round(NonCompliant,2) | timechart span=1d first(Compliant) as Compliant first(NonCompliant) as NonCompliant
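One reason a search like this charts nothing is that `stats count ...` collapses everything to a single row with no `_time`, so the later `timechart` has nothing to bucket. A sketch of a join-free alternative, reusing the field names from the search above (the base filter is simplified and would need adjusting to match the real compliant/non-compliant definitions):

```
index="index" sourcetype=productversion removablemedia_win OSType="Windows*" NOT ProductName=""
| bin _time span=1d
| stats count as lifecycletotal
        count(eval(LifeCycleStatus="Mainstream" OR LifeCycleStatus="Emerging")) as IsCompliant
        by _time
| eval Compliant=round(IsCompliant/lifecycletotal*100,2)
| eval NonCompliant=round(100-Compliant,2)
| timechart span=1d first(Compliant) as Compliant first(NonCompliant) as NonCompliant
```

Keeping `_time` in the `by` clause preserves one row per day, which gives `timechart` the two percentage series to plot.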
Why does a subsearch return a boolean value? I am expecting to see the department value. Search shown below: index="activedirectory" (userPrincipalName=*@emailaddress.ca) | eval From_Sub_Search=tostring([search index="activedirectory" (userPrincipalName="*@emailaddress.ca") | return department]) | eval From_Department=tostring(department) | table From_Sub_Search, From_Department
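For context on why this happens: subsearches are expanded textually before the outer search runs, so `[... | return department]` becomes the literal text `department="Sales"` (for example). Inside `tostring(...)`, that text is then parsed as an equality comparison, which evaluates to a boolean. A sketch of the usual workaround, wrapping the value in quotes before `return $` so eval receives a string literal (this assumes a single department value comes back):

```
index="activedirectory" (userPrincipalName=*@emailaddress.ca)
| eval From_Sub_Search=[search index="activedirectory" (userPrincipalName="*@emailaddress.ca")
      | eval department="\"" . department . "\""
      | return $department]
| table From_Sub_Search, department
```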
Attempt A index="w3c" | rex field=_raw "?(sessionid=?)\w{8}-\w{4}-\w{4}-\w{4}-\w{12}" | table ABC _raw Attempt B index="w3c" | rex field=_raw "\.\sessionid\=\"(?P)[\w{8}]-[\w{4}]-[\w{4}]-[\w{4}]-[\w{12}]" | table ABC _raw Attempt C index="w3c" | rex field=_raw "\.\sessionid\=\"(?P[\w{8}]-[\w{4}]-[\w{4}]-[\w{4}]-[\w{12}])" | table ABC _raw (I used a named field ABC, it gets cut from this post) FROM text: .sessionid=d2a4f0de-747f-413c-a823-03ee7d241d5b&hash The GOAL: ABC = d2a4f0de-747f-413c-a823-03ee7d241d5b
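A sketch of a pattern that should reach the goal: inside `[...]` a quantifier like `{8}` loses its meaning, and the capture group has to wrap the whole GUID with the name in angle brackets, `(?<ABC>...)`. This assumes lowercase hex GUIDs as in the sample text; swap the character class for `\w` if mixed case is possible:

```
index="w3c"
| rex field=_raw "sessionid=(?<ABC>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
| table ABC _raw
```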
I have a Splunk query that gives me the different values of an appid, and a CSV file which has a single field called appid. I want to write a query which will give the appid values that are in the search results but not in the CSV. Thanks in advance
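A sketch of one common pattern, assuming the CSV has been uploaded as a lookup file named `appid.csv` (the index and sourcetype below are placeholders for the real base search):

```
index="your_index" sourcetype="your_sourcetype"
| dedup appid
| search NOT [ | inputlookup appid.csv | fields appid ]
| table appid
```

The subsearch expands into `appid="a" OR appid="b" OR ...` from the lookup, and the `NOT` keeps only appid values the CSV does not contain.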
Is there any (more detailed) documentation and/or other information that can be shared re: the REST API for querying data on the attributes of Synthetic Monitoring Sessions that have been executed?  There is the one community article titled "How do I enable or disable synthetic jobs programmatically using the API?"  https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-enable-or-disable-synthetic-jobs-programmatically-using/ta-p/23455 which provides a working use case, but it is limited to one specific REST operation.  What are our additional options?  I am interested, specifically, in pulling the log that would be displayed for any one Session by clicking on the button labelled "Show Script Output".  I want to do that to extract lines matching a regex pattern we would supply, pulling out one or more data points resulting from calculations that we perform in the script and write to the session log.  And I want to do that on a bulk basis, rather than performing an individual copy-and-paste out of each log.
After upgrading to 8.0.2 from 7.3.1, splunkweb won't start. After I remove the Search Activity app, it starts again.
Hi All, I need help extracting {0000000-0000-0000-0000-000000000000} and {0000000-0000-0000-0000-000000000000} from the log sample below during search. This is what I have so far: sourcetype=wineventlog EventCode="4662" Account_Name="\$" Access_Mask=0x100 (Object_Type="%{19195a5b-6da0-11d0-afd3-00c04fd930c9}" OR ObjectT_ype="domainDNS") | rex field=Message "Properties: (?P[^\s]+) {1131f6ad-9c07-11d1-f79f-00c04fc2dcd2} " | rex field=Message "Properties: (?P[^\s]+) {9923a32a-3607-11d2-b9be-0000f87a36b2} " | rex field=Message "Properties: (?P[^\s]+) {1131f6ac-9c07-11d1-f79f-00c04fc2dcd2} " Please help me fix this search. Log sample: LogName=Security SourceName=Microsoft Windows security auditing. EventCode=4662 EventType=0 Type=Information ComputerName=gghasfv.net TaskCategory=Directory Service Access OpCode=Info RecordNumber=0000000 Keywords=Audit Success Message=An operation was performed on an object. Subject : Security ID: S-1-5-21-0000000-0000-0000-0000-000000000000 Account Name: NAME$ Account Domain: GOAL Logon ID: GOAL Object: Object Server: DS Object Type: %{0000000-0000-0000-0000-000000000000} Object Name: %{0000000-0000-0000-0000-000000000000} Handle ID: Operation: Operation Type: Object Access Accesses: Control Access Access Mask: 0x100 Properties: Control Access {0000000-0000-0000-0000-000000000000} {0000000-0000-0000-0000-000000000000} Additional Information: Parameter 1: Parameter 2
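One hedged simplification: rather than one `rex` per known GUID, every brace-wrapped GUID in the Message can be pulled into a single multivalued field (the field name `prop_guid` is a placeholder):

```
sourcetype=wineventlog EventCode="4662" Account_Name="\$" Access_Mask=0x100
| rex field=Message max_match=0 "(?<prop_guid>\{[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\})"
| table _time prop_guid
```

With `max_match=0`, `prop_guid` holds every GUID in the event, including the ones on the "Properties:" lines, which can then be filtered with `mvfilter` or `search` as needed.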
Hi team! Should I upgrade my universal forwarders after I upgrade my HF? Data > UF > HF > Indexer Right now everything is on version 6.5.2. The indexer and HF will be on 7.3.4 soon. Thanks! Cheers!
Anything wrong with this join and subsearch? I know there are events which should match based on the 'cs_host' field. Not sure if the rename is confusing things, or my syntax is off slightly. index=aaa sourcetype=o365 "*phish URLs*" | rename zu as cs_host | join type=inner cs_host [ | search index=bbb sourcetype="proxy_logs" | fields *] | stats count by cs_host Appreciate any ideas.
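A join can silently drop matches when the key differs in case or format between the two sourcetypes (e.g. `www.Example.com` vs `www.example.com`). A sketch of a join-free way to check for overlap on the shared key, assuming `zu` and `cs_host` hold comparable hostnames:

```
(index=aaa sourcetype=o365 "*phish URLs*") OR (index=bbb sourcetype="proxy_logs")
| eval cs_host=lower(coalesce(cs_host, zu))
| stats dc(index) as idx_count count by cs_host
| where idx_count=2
```

Hosts that survive the `where` appear in both indexes; if nothing survives, the key values themselves likely never match and the join was not the problem.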
While trying to create a new index on the search head, I am getting an error like: Invalid apply cluster-bundle error="Bundle validation is in progress"
Hi, I have Windows XML logs arriving at my heavy forwarder (via the universal forwarder with the TA_windows). When I send these events through syslog, I can see that some events are split due to the carriage returns. Example:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-A5BA-3E3B0328C30D}'/><EventID>4672</EventID><Version>0</Version><Level>0</Level><Task>12548</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2020-03-10T16:15:47.000184600Z'/><EventRecordID>157445</EventRecordID><Correlation/><Execution ProcessID='516' ThreadID='2448'/><Channel>Security</Channel><Computer>MININT-5B0409J.test.lan</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>NT AUTHORITY\SYSTEM</Data><Data Name='SubjectUserName'>MININT-5B0409J$</Data><Data Name='SubjectDomainName'>TEST</Data><Data Name='SubjectLogonId'>0x1bda93a</Data><Data Name='PrivilegeList'>SeSecurityPrivilege SeBackupPrivilege SeRestorePrivilege SeTakeOwnershipPrivilege SeSystemEnvironmentPrivilege SeLoadDriverPrivilege SeImpersonatePrivilege SeDelegateSessionUserImpersonatePrivilege SeDebugPrivilege SeEnableDelegationPrivilege</Data></EventData></Event>

My syslog message looks like this (cut off mid-event):

<13> MININT-5B0409J <Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-A5BA-3E3B0328C30D}'/><EventID>4672</EventID><Version>0</Version><Level>0</Level><Task>12548</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2020-03-10T16:15:47.000184600Z'/><EventRecordID>157445</EventRecordID><Correlation/><Execution ProcessID='516' ThreadID='2448'/><Channel>Security</Channel><Computer>MININT-5B0409J.test.lan</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>NT AUTHORITY\SYSTEM</Data><Data Name='SubjectUserName'>MININT-5B0409J$</Data><Data Name='SubjectDomainName'>TEST</Data><Data Name='SubjectLogonId'>0x1bda93a</Data><Data Name='PrivilegeList'>SeSecurityPrivilege

Here is the conf I use:

props.conf
[host::MININT*]
TRANSFORMS-orano = windows-compagny
MAX_TIMESTAMP_LOOKAHEAD = 16
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = <Event xmlns
LINE_BREAKER = ([\r\n]+)(?=<Event xmlns)

transforms.conf
[windows-compagny]
REGEX = .
#REGEX = <Event ((\S|\s)*?)<\/Event>
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslog_data

outputs.conf
[syslog:syslog_data]
maxEventSize = 9999999
server = 192.168.1.10:515
type = tcp

If someone has an idea, thanks in advance.
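One approach sometimes used for this symptom (a sketch only, not verified against this exact setup) is to flatten the embedded newlines out of the event on the heavy forwarder before it is routed to syslog, via a SEDCMD in props.conf:

```
[host::MININT*]
SEDCMD-flatten_newlines = s/[\r\n]+/ /g
```

SEDCMD rewrites _raw during parsing, so the event reaching the syslog output would then be a single line; the trade-off is that the multi-line formatting inside fields like PrivilegeList is lost.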
Hi, I have an ask where I need to find the top 100 URLs that have more than 50 hits per hour on the server; that is, if a particular URL is requested more than 50 times in an hour, I need to list it. And I need to list the top 100 such URLs, the most visited ones. Any help is appreciated. Below is the query I have, but it is not giving what I want: index=temp_index source="/app/request.log" host="server-1b*" GET | rex field=_raw "GET (?[^\s]+)" | bucket span=1h _time | stats count as hour_count by _time requested_content
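A sketch extending the search above: keep only the hours in which a URL exceeded 50 hits, then rank URLs by total hits across those hours and keep the top 100. (The rex below assumes the capture group was meant to be named `requested_content`, since the name was stripped from the post.)

```
index=temp_index source="/app/request.log" host="server-1b*" GET
| rex field=_raw "GET (?<requested_content>[^\s]+)"
| bucket span=1h _time
| stats count as hour_count by _time requested_content
| where hour_count > 50
| stats sum(hour_count) as total_hits by requested_content
| sort 100 -total_hits
```

The numeric argument to `sort` limits the output to 100 rows, so the result is the 100 most-visited URLs among those that ever exceeded 50 hits in an hour.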
Hi, I want to use syslog-ng to send Windows logs from a heavy forwarder to an indexer. But I have a problem: the message is truncated to the first 1 KB of data (due to the RFC). Is there any solution to send my message through syslog without it being truncated? Thanks in advance.
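If the limit is actually syslog-ng's default maximum message size rather than the RFC itself, syslog-ng exposes a log-msg-size option that raises it (a sketch; the 64 KB value here is an assumption to be sized against the largest expected event):

```
options {
    log-msg-size(65536);
};
```

Note this only governs what syslog-ng itself will carry; every hop in the chain (sender, relay, receiver) has to allow the larger size or the message is truncated again at that hop.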
Hello everyone, I am curious what others have experienced with the Nexpose TA. We have had many discussions with their support and our account reps and were never able to get our Nexpose dashboard to mirror what's actually on the servers. From our discussions with their support and SMEs, the TA functions by signaling the Nexpose box to query the InsightVM agents that are both accessible at that time and have new updates. My theory is that this query is no different than a normal query that Nexpose invokes itself. Meaning, if your cron from Splunk signals Nexpose every day at 4:00 and Nexpose internally runs a query of the agents at 3:00, then you will only receive the delta from 3-4 in Splunk. If my theory is correct and Nexpose queried at 3:00, it will only forward logs from the machines that have new updates from 3-4. Right now our experience is that when we search over 24 hours, we only see a fraction of the assets and vulnerabilities we have. If we look over 30 days, we get much more accurate asset counts, but then we also see legacy vulnerabilities and assets. What would be great is if the TA itself queried Nexpose's database and received the entire table on a daily basis, purging after x days. That way, whenever we launch the app and look over 24 hours, we get the full asset and vulnerability counts and types. Any thoughts, ideas, or experience that would either disprove what I stated and/or provide a workaround for our issue? I appreciate your time! Thanks, Christian
I am displaying on a counter a value that basically counts the times a login has failed, but I would like to get an email every time that counter goes over 5, so that I can better monitor what is going on, or whether it is an attack. Thank you!
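A sketch of one way to turn the counter into an email alert (the index, sourcetype, and search terms below are placeholders for the real failed-login search): embed the threshold in the search itself, then save it as a scheduled alert that triggers when the number of results is greater than zero, with the "Send email" alert action configured.

```
index="your_index" sourcetype="your_sourcetype" "login failed"
| stats count as failed_logins
| where failed_logins > 5
```

The search returns a row only when the count exceeds 5, so a "results > 0" trigger fires exactly when the dashboard counter would pass the threshold.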
After running the Splunk Platform Upgrade Readiness App, one of many add-ons/apps on just one Splunk Enterprise server that did not pass was the Splunk Add-on for Cisco ISE, newest version 3.0.0. Just this one app has 117 Python scripts that need to be made compatible: "Required Action: Update these Python scripts to be dual-compatible with Python 2 and 3." Does Splunk have a conversion script which can be run, for any app, to convert Python 2 scripts to be dual-compatible? If not, will the Add-on for Cisco ISE be made Python 3 compatible? Also, when will all the other Splunk-built add-ons and apps be Python 3 compatible so that upgrading to Splunk 8 can proceed?
Hello all, Trying to set up an alert for when hosts have the W3SVC service but aren't producing actual logs. Any ideas would be much appreciated. Thanks!
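A sketch of one pattern for "expected host went silent" alerts, assuming a lookup file (here hypothetically named `w3svc_hosts.csv`, with a `host` column) that lists the hosts known to run W3SVC, and an `iis` index as a placeholder for wherever the IIS logs land:

```
| tstats count where index=iis earliest=-24h by host
| append [| inputlookup w3svc_hosts.csv | eval count=0]
| stats sum(count) as events by host
| where events=0
```

Hosts in the lookup that sent no events in the window sum to zero and survive the `where`, so an alert triggered on "results > 0" flags exactly the silent W3SVC hosts.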