All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am looking at a weird issue while trying to fix one of the panels in a dashboard. The panel has a query like the one below:

index=<index> sourcetype=log4j host=$host$ <Extracted field> != NULL | timechart span=1m count by <Extracted field>

The issue is that we are getting inaccurate counts: the "<Extracted field> != NULL" part of the query is filtering out the majority of the events, yet when we try to see which events are being filtered by searching "<Extracted field> = NULL" instead, we see no events at all. How does Splunk treat extracted fields that are NULL, and in what situations do these fields end up as NULL? Any suggestions for the above issue? Thanks in advance!
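For context, in SPL both <field> != NULL and <field> = NULL only match events where the field actually exists (NULL here is just the literal string "NULL"); events where the extraction produced no field at all match neither test, which would explain counts dropping on one side and zero results on the other. A minimal sketch of one way to surface the events with a missing field, keeping the question's placeholder names:

    index=<index> sourcetype=log4j host=$host$
    | fillnull value="(missing)" <Extracted field>
    | timechart span=1m count by <Extracted field>

Here fillnull assigns a visible placeholder value to events where the field is absent, so they show up as their own timechart series instead of being silently dropped.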
Hey, has anyone created a search that merges an IP address from threat intel with an IP address from Azure so that it triggers an alert if there's a match? I don't know if it's possible. Thanks, I will appreciate any help or advice. I am new to ES.
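A rough sketch of the usual subsearch pattern for this kind of match; the index names and IP field names below are assumptions that would need to match the actual data:

    index=<azure_index> src_ip=*
        [ search index=<threat_intel_index> | rename threat_ip AS src_ip | fields src_ip ]
    | table _time src_ip

The subsearch returns the threat-intel IPs as an OR'd filter on the outer Azure search; saving this as an alert that fires when the result count is greater than zero gives the match-triggered behavior described. In ES specifically, threat intel matching is often handled through the built-in threat intelligence framework instead, so this is only one possible approach.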
I've set up some tables in DB Connect, using a timestamp (date_modified) as a rising column (there were no other suitable fields, and it's COTS, so the vendor isn't going to add a unique value). The where clause is "where date_modified > ?". The issue is that each time the input runs, it creates a new event for the most recent row. I would expect that to happen if the where clause said "where date_modified >= ?", but with it set as "where date_modified > ?", I would think it would just add no rows. Has anyone else seen this?
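One possible explanation is precision loss in the checkpoint: if DB Connect stores the last rising-column value with less precision than the database column (fractional seconds truncated, for example), then "> ?" behaves like ">=" for the boundary row. As a hedged workaround, the duplicate can be dropped at search time; the column names other than date_modified are placeholders for whatever combination identifies a row in the table:

    index=<db_index> source=<db_input_name>
    | dedup date_modified <col1> <col2>

Fixing the precision at the input (or casting the column in the SQL query) would be the cleaner fix if the COTS schema allows it.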
I'm having trouble using the where command to compare times. The search that I'm running is this:

index=jamf sourcetype=JamfModularInput "computer.general.last_contact_time_epoch"=* "computer.general.last_contact_time_epoch"!=0 | dedup computer.pagination.serial_number | rename computer.general.last_contact_time_epoch as checkinepoch | eval thirtydays=relative_time(now(),"-30d") | rename computer.general.last_contact_time as "Last Check-In" | where "thirtydays">"checkinepoch" | table thirtydays,checkinepoch,"Last Check-In"

The problem I have is that it returns no results with the where command using less than (<), and if I use greater than (>) it returns all of the results without filtering the ones that I want. Here is an example of the output with that search: as you can see, I am getting results returned where checkinepoch is larger than thirtydays. Does the where command treat the decimal in the thirtydays number as a multiplication operator (like x*y = xy)? The effect of this could be that it calculates that value as 1634051921 * 000000 = 0. Super confused by this, please help!
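For what it's worth, the likely culprit is the quoting rather than the decimal: in where (and eval), double-quoted tokens are string literals, not field names, so "thirtydays">"checkinepoch" compares the literal strings "thirtydays" and "checkinepoch" alphabetically, which is always true one way and never the other. A sketch of the comparison with the quotes removed, so the field values are compared numerically:

    ...
    | eval thirtydays=relative_time(now(),"-30d")
    | where thirtydays > checkinepoch
    | table thirtydays checkinepoch "Last Check-In"

If a field name does need quoting in where or eval (spaces, dots), single quotes refer to the field ('Last Check-In'), while double quotes always produce a string literal.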
Hello, thank you for taking the time to read and consider my question. I'm trying to integrate a .json file which contains a list of suspicious domains into a scheduled search that compares that data with a field containing destination URLs for web traffic. I've already designated an index and sourcetype for the suspicious URLs (which will be updated daily, so neither of these is a static file with predictable or constant values). What I'm looking to do now is basically ingest the dest_hostname field from the web traffic as well as the bad_domains field from the .json file and find any matching/common values between them. Here's an example of the data and what I would like to accomplish:

dest_hostname    bad_domain     matched_url
facebook.com     reddit.com     amazon.com
amazon.com       amazon.com     splunk.com
google.com       splunk.com
splunk.com       nfl.com

Once again, thank you for taking the time to read this, and any ideas or solutions would be greatly appreciated!
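A minimal sketch of one way to intersect the two fields across indexes; the index names are placeholders, and bad_domains is the field name used in the question:

    index=<web_traffic_index> OR index=<bad_domain_index>
    | eval domain=coalesce(dest_hostname, bad_domains)
    | stats dc(index) AS source_count BY domain
    | where source_count > 1
    | rename domain AS matched_url

Because each event carries only one of the two fields, coalesce collapses them into a single domain field; a domain seen in both indexes gets source_count=2 and survives the filter, producing exactly the matched_url column from the example.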
Hey all, a bit of a Microsoft question... We want to monitor Windows Group Policy changes in our domain. We have installed the Splunk Add-on and App for Exchange and Active Directory, plus the relevant content packs containing some reports about this, and we do get events. But... we also have AGPM (Advanced Group Policy Management, Microsoft software) installed and configured.

Under the terms of that software, Microsoft Advanced Group Policy Management (AGPM) is a client/server application. The AGPM Server stores Group Policy Objects (GPOs) offline in the archive that AGPM creates on the server's file system. Group Policy administrators use the AGPM snap-in for the Group Policy Management Console (GPMC) to work with GPOs on the server that hosts the archive. A few terms: a controlled GPO is a GPO that is being managed by AGPM (AGPM manages the history and permissions of controlled GPOs, which it stores in the archive); an uncontrolled GPO is a GPO in the production environment for a domain and not managed by AGPM.

When you edit a GPO using the AGPM system, you work on a copy of the original GPO. As a result, the Windows event logs on the domain controllers report on a different object, so the Splunk reports and event types for Group Policy change can't figure out which GPO is being changed (since AGPM renames it and creates a "new" one).

So, after all these words: can someone help us find a proper application to monitor and view GPO changes made via AGPM in Splunk? Has anyone encountered this before? Does such a product exist? And if there is no other choice, can you help us write new searches to catch GPO changes in AGPM? Thanks, Auto Team
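As a starting point for hand-written searches, GPO changes in AD generally surface as directory-service-modification audit events. A hedged sketch, assuming XML-rendered Windows Security events and advanced auditing enabled; the index name is a placeholder and the field names depend on the Windows TA version in use:

    index=<wineventlog_index> source="XmlWinEventLog:Security" EventCode=5136 ObjectClass=groupPolicyContainer
    | stats values(AttributeLDAPDisplayName) AS changed_attributes BY ObjectDN SubjectUserName

Correlating these with AGPM's own deploy step (the moment a controlled GPO is pushed from the archive into production) would be the AGPM-specific piece, and likely the part that needs custom work.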
Hi all, I need a Splunk query to identify orders which were ordered but not submitted even after 72 hours. Can anyone help me with this? Thanks.
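A rough sketch of one common pattern for this, assuming each order has an order_id field and a status field taking values like "ordered" and "submitted" (all of these names are assumptions):

    index=<orders_index> (status="ordered" OR status="submitted")
    | stats min(eval(if(status="ordered", _time, null()))) AS ordered_time,
            sum(eval(if(status="submitted", 1, 0))) AS submitted_count BY order_id
    | where submitted_count=0 AND (now() - ordered_time) > 259200

The stats clause collapses each order to when it was ordered and how many submit events it has; 259200 is 72 hours in seconds. The search window needs to reach back at least 72 hours plus however long an unsubmitted order should remain visible.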
I have a search that displays unique users per day (based on a "user id" field). I also would like another search that displays "new" unique users per day, looking back to some fixed date. I suspect maybe I need a sub-search using "earliest" and "latest", but I don't know how to put it together.
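A compact sketch without a subsearch: compute each user's first-ever appearance since the fixed date, then count users by the day they first appeared. The user_id field name stands in for the question's "user id" field, and <fixed_date> is a placeholder:

    index=<app_index> earliest=<fixed_date>
    | stats earliest(_time) AS first_seen BY user_id
    | bin first_seen span=1d
    | stats count AS new_users BY first_seen

Each user contributes to exactly one day (the day of their first event in the window), which is what "new unique users per day" usually means.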
Team, can you please provide me with documentation links for learning the Splunk UBA platform, and related links for monitoring, development, architecture, installation, etc.? Thank you.
Greetings all. I have an app, let's call it "servers", that is deployed on multiple hosts. I can see in the deployment server that the app deployed OK to its specified server class, let's call it "nix_servers". There is only one app deployed to this server class (the "servers" app deployed to the "nix_servers" server class). I'm trying to get a list of all hosts (clients) that have the "servers" app deployed on them, regardless of whether those hosts (clients) are in the "nix_servers" server class or not. Is there any way to get this list exported to a .csv (or any other format) file? Thanks!
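One possible route is the deployment server's REST endpoint for its clients; a sketch to run on the deployment server itself (the exact field names returned vary by Splunk version, so running the rest command on its own first to inspect the output is worthwhile):

    | rest /services/deployment/server/clients splunk_server=local
    | table hostname ip clientName
    | outputcsv deployment_clients.csv

Adding a filter on the application fields the endpoint exposes would narrow this to clients that actually received the "servers" app; outputcsv writes the result to a CSV under the search head's var/run/splunk/csv directory.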
Hello, we upgraded the Microsoft Azure Add-on for Splunk to the latest version, 3.2.0. After the upgrade, we started seeing the following errors:

From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-MS-AAD/bin/TA_MS_AAD_rh_settings.py persistent}: "Failed to get password of realm=%s, user=%s." % (self._realm, user)
From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-MS-AAD/bin/TA_MS_AAD_rh_settings.py persistent}: File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/utils.py", line 148, in wrapper
From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-MS-AAD/bin/TA_MS_AAD_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):
From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-MS-AAD/bin/TA_MS_AAD_rh_settings.py persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#TA-MS-AAD#configs/conf-ta_ms_aad_settings, user=proxy.

I tried adding the credentials again and re-creating the inputs, but I'm still getting them. We are getting the logs, but I'm not sure whether these errors are impacting us, whether we are getting all the logs, or how we should correct them.

Thank you, Andreea
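The exception names user=proxy, so the add-on seems to be looking up a proxy password that was never stored; if no proxy is actually configured, this kind of error is often cosmetic. A quick sketch to confirm ingestion is unaffected, with the index name as a placeholder:

    | tstats count WHERE index=<azure_index> BY _time span=1h

A steady hourly count across the upgrade time would suggest the errors are not costing any data; a drop to zero around the upgrade would point the other way.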
Hi, I'm new to Splunk, but I need some answers pretty fast. We are invited to insource infrastructure monitoring and control for a high-security environment. As we are outside the customer's domain, the dashboard obviously runs on servers outside the customer's infrastructure. Of course there needs to be communication between agents running in the infrastructure and the dashboard to upload events and monitoring data. However, it is an absolute requirement from the customer that there is NO traffic from the dashboard to the agents on their infrastructure. Upload of data is no problem, but any packet downstream will be blocked, even "keep alive" traffic. Is anyone experienced enough to give me an answer on this? Thanks, Wim
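One caveat worth noting: Splunk forwarders normally ship data over TCP, and TCP itself sends acknowledgement packets back toward the sender, so a rule that blocks literally every downstream packet would break plain forwarding regardless of Splunk settings; strictly one-way delivery usually means UDP syslog or a data diode in between. At the Splunk level, the closest configuration is disabling application-level acknowledgements; a sketch of a forwarder's outputs.conf, where the host and port are placeholders:

    [tcpout]
    defaultGroup = external_collectors

    [tcpout:external_collectors]
    server = collector.example.com:9997
    useACK = false

useACK=false removes Splunk's own indexer acknowledgements (at the cost of delivery guarantees), but it does not remove TCP's transport-level ACKs.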
Hi, we have a bunch of HFs in our environment, and we are confused about which HF the data is coming through. To find out easily, we want to write a stanza in props.conf under default/local. Based on this, what stanza can I write? Example fields: splunk_HF, indextime
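A hedged sketch of one way to stamp events with the HF they passed through, using INGEST_EVAL (available in Splunk 7.2 and later); "hf01" is a placeholder for each forwarder's own name, and the stanza names are arbitrary:

    # transforms.conf on each heavy forwarder
    [add_hf_name]
    INGEST_EVAL = splunk_HF="hf01"

    # props.conf on each heavy forwarder
    [default]
    TRANSFORMS-hfname = add_hf_name

    # fields.conf on the search head, so the indexed field is searchable
    [splunk_HF]
    INDEXED = true

For the indextime part, no new field is needed: every event already carries _indextime, viewable with something like | eval indexed_at=strftime(_indextime, "%F %T").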
Hello, I am trying to timechart two event types ONLY: heartbeat and start. However, every event in our Splunk is also mapped to nix-all-logs and a few other event types by the system admin. Attached are screenshots. How can I timechart these two event types only?
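Since eventtype is multivalue when an event matches several event types, splitting a timechart by it drags nix-all-logs and the rest along. A small sketch that filters the multivalue field down to just the two wanted values before charting (the index name is a placeholder):

    index=<index> eventtype=heartbeat OR eventtype=start
    | eval type=mvfilter(eventtype=="heartbeat" OR eventtype=="start")
    | timechart count BY type

mvfilter keeps only the matching values of eventtype, so the BY clause sees exactly "heartbeat" or "start" and nothing else.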
Hello everyone, through the DMC console I found that Splunk has stopped getting logs into the _introspection index. I ran a search like this: "index=_introspection sourcetype=splunk_resource_usage component=PerProcess host=*" and I can see that there were events, but they stopped. Can anybody help me with this problem?
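Introspection data is produced locally on each instance by the bundled introspection_generator_addon app, so two things worth checking are whether that app is still enabled on the affected host and what splunkd.log says about the generator. A sketch of the latter (the component filter is an assumption and may need loosening):

    index=_internal sourcetype=splunkd host=<affected_host> "IntrospectionGenerator"
    | stats count BY component log_level

Errors, or a complete absence of IntrospectionGenerator lines after the events stopped, would point at the generator on that host rather than at forwarding or indexing.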
I am forwarding some JSON files from a Splunk forwarder on Linux; an example file is below:

{ "dateTime" : "04/11/2021 08:22:30", "functionName" : "ZAUTOPSRALL", "userId" : "sanchez", "issueCategory" : "PSR", "issueType" : "HDRUNKNOWN", "issueSummary" : "PSR File Processing â\u0080\u0093 Cannot match to original file", "issueDescription" : "The received PSR file &quot;PSR_CBD174.PAIN001_DTLRJCT3.xml&quot; refers to an unknown original file.\n\nPSR file\nName: PSR_CBD174.PAIN001_DTLRJCT3.xml\nCreated: 2021-10-08T12:09:43+01:00\nMessage ID: LBG/0000000027834/003\n\nReference to original file\nMessage ID: MSGID/PAIN001/20210913T100930/1\nStatus: RJCT\nControl sum: 38965.82\nNumber of transactions: 86", "exceptionType" : null, "notificationId" : null, "timeStamp" : 1636014150661056 }

It's not being indexed; I found the following errors for this file in splunkd.log. I ran the JSON through a JSON checker and it was valid, so I'm not sure why Splunk is complaining. Any help would be much appreciated.

11-05-2021 15:48:57.625 +0000 ERROR JsonLineBreaker [10224113 structuredparsing] - JSON StreamId:14224088848725967690 had parsing error:Unexpected character while parsing backslash escape: 'x' - data_source="/sanchez/instances/beta/log/splunk/splunk_1636014150661056_19399032.json", data_host="pbasalsldw002", data_sourcetype="_json"
11-05-2021 15:48:57.625 +0000 ERROR JsonLineBreaker [10224113 structuredparsing] - JSON StreamId:14224088848725967690 had parsing error:Unexpected character: ':' - data_source="/sanchez/instances/beta/log/splunk/splunk_1636014150661056_19399032.json", data_host="pbasalsldw002", data_sourcetype="_json"
(the "Unexpected character: ':'" error repeats twice more)
11-05-2021 15:48:57.625 +0000 ERROR JsonLineBreaker [10224113 structuredparsing] - JSON StreamId:14224088848725967690 had parsing error:Unexpected character in string: '\0A' - data_source="/sanchez/instances/beta/log/splunk/splunk_1636014150661056_19399032.json", data_host="pbasalsldw002", data_sourcetype="_json"
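The first error ("backslash escape: 'x'") suggests the raw file contains \xNN escape sequences, and JSON only permits \uXXXX escapes, so the file is invalid JSON to Splunk's strict structured parser even if a lenient checker accepts it (the â\u0080\u0093 also looks like an en dash whose UTF-8 bytes were mis-decoded, hinting at an encoding problem at the producer). Fixing the escaping at the source is the clean fix; failing that, a hedged props.conf sketch that skips structured index-time JSON parsing in favor of more tolerant search-time extraction, assuming one pretty-printed object per file:

    [json_relaxed]
    SHOULD_LINEMERGE = true
    BREAK_ONLY_BEFORE = ^\{
    TRUNCATE = 0
    KV_MODE = json

The sourcetype name json_relaxed is made up; the stanza would need to be assigned to these files in inputs.conf in place of _json.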
Hi all, I am using Website Monitoring on one of our HFs, but whenever I run a sourcetype=web_ping query in the search bar, the number of Splunk PIDs increases suddenly and the Splunk service on the HF stops. Please suggest where I am going wrong / help me fix the issue. We are monitoring around 114 URLs, with inputs.conf stanzas like the sample below:

[web_ping://SOMAN]
interval = 2m
title = SOMAN
url = http://sapsoman.www.com:5030/startPage
user_agent = Splunk Website Monitoring (+https://splunkbase.splunk.com/app/1493/)
configuration = default

website_monitoring.conf:

[default]
max_response_body_length = 1000
proxy_port = 312
proxy_server = proxy.conexus.svc.local
proxy_type = http
thread_limit = 100

Error:

ERROR [618cf90fac7eff7b2b1290] config:146 - [HTTP 401] Client is not authenticated
Traceback (most recent call last):
File "/opt/app/splunk/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/config.py", line 144, in getServerZoneInfoNoMem
return times.getServerZoneinfo()
File "/opt/app/splunk/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/times.py", line 163, in getServerZoneinfo
serverStatus, serverResp = splunk.rest.simpleRequest('/search/timeparser/tz', sessionKey=sessionKey)
File "/opt/app/splunk/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 553, in simpleRequest
raise splunk.AuthenticationFailed
splunk.AuthenticationFailed: [HTTP 401] Client is not authenticated

@LukeMurphey @Anonymous
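Before changing settings, it may help to confirm whether the HF is dying from resource exhaustion; a sketch using per-process introspection data (the host value is a placeholder, and data.process as a split field may need adjusting to what the data shows):

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess host=<hf_host>
    | timechart max(data.mem_used) AS mem_used_mb BY data.process

A spike around the crash would support lowering thread_limit from 100 or spreading the 114 URLs across longer intervals, since each ping cycle can fan out many worker threads at once.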
We tried to install Splunk 8.1.0, and after untarring the file we tried to start Splunk, both as root and as the splunk user, via /opt/splunk/bin/splunk start. The error comes up as: execve: Operation not permitted while running command /opt/splunk/bin/splunkd. Any urgent help is appreciated.
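execve: Operation not permitted usually points at the OS refusing to execute the binary rather than at Splunk itself; common causes are a filesystem mounted noexec or a security module (SELinux/AppArmor) denial. A few shell checks worth running as a first pass:

    mount | grep /opt              # look for "noexec" on the filesystem holding /opt/splunk
    ls -l /opt/splunk/bin/splunkd  # confirm the execute bit survived the untar
    getenforce                     # SELinux mode, if SELinux is installed
    dmesg | tail -50               # recent kernel/audit messages often show the denial

If the mount shows noexec, remounting with exec (or installing to a partition that allows execution) is the usual fix.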
Hi, I have an issue that Splunk might help solve. Here is the scenario: I need to find unusual send and receive patterns in a huge log file. Here is an example:

00:00:01.000  S-001
00:00:01.000  S-002
00:00:01.000  S-003
00:00:01.000  S-004
00:00:01.000  S-005
00:00:01.000  R-005
00:00:01.000  S-006
00:00:01.000  R-006
00:00:01.000  S-007
00:00:01.000  S-008
00:00:01.000  R-008
00:00:01.000  R-007
00:00:01.000  S-009
00:00:01.000  S-010
00:00:01.000  S-011
00:00:01.000  S-012
00:00:01.000  S-013
00:00:01.000  R-009
00:00:01.000  R-010
00:00:01.000  R-011
00:00:01.000  R-012
00:00:01.000  R-013
00:00:01.000  S-014
00:00:01.000  R-014
00:00:01.000  R-001
00:00:01.000  R-002
00:00:01.000  R-003
00:00:01.000  R-004

The lines marked in red are what I need to detect and show on a chart. FYI 1: duration is not a good way to find them, because some of them occurred at the exact same time. FYI 2: the IDs are different and not sequential like in the example above; they look like 98734543 or 53434444. Any ideas? Thanks,
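Since the timestamps collide, ordering has to come from event position rather than time. A hedged sketch that numbers the events in order, then measures how many other events sit between each send and its matching receive (the rex pattern and the gap threshold of 10 are assumptions to adjust):

    index=<log_index>
    | rex "(?<direction>[SR])-(?<id>\w+)"
    | sort 0 _time
    | streamstats count AS seq
    | stats min(eval(if(direction=="S", seq, null()))) AS s_seq,
            min(eval(if(direction=="R", seq, null()))) AS r_seq,
            min(_time) AS _time BY id
    | eval gap=r_seq - s_seq
    | where gap > 10 OR isnull(r_seq)
    | timechart span=1m count AS suspicious_ids

One caveat: when timestamps are identical, Splunk does not guarantee original file order, so the seq numbering is only trustworthy if ingestion preserves it; if not, a line number captured at ingest (or an ordered copy of the data) would be needed.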
Hi, I have to run a Python script as an alert action. My Splunk is on Windows. I tried running my script like this and it's working; it's a very basic hello-world script:

C:\Program Files\Splunk\bin>splunk cmd python hello_world.py
This message will be displayed on the screen.

commands.conf:

[hello_world]
filename = hello_world.py

I have placed commands.conf in C:\Program Files\Splunk\etc\apps\search\local and C:\Program Files\Splunk\etc\system\local. When I try running the script from the search bar, it's not working:

| script python hello_world
OR
| script hello_world

Error message: Error in 'script' command: Cannot find program 'hello_world' or script 'hello_world'.

I'm not sure why it's not able to find the script. I have placed it in multiple locations:

$SPLUNK_HOME$\etc\apps\search\bin\scripts\hello_world.py
$SPLUNK_HOME$\bin\hello_world.py   (from the command line it takes this script)

My ultimate goal is to run this script as an alert action, but I don't think there is an option to run a Python script. There is a "run a script" option, but it seems like that is only for shell scripts. Thanks
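For the ultimate goal, the modern route is a custom alert action rather than the script search command (which is deprecated); a hedged sketch, where the app name and all the hello_world naming are just this example's placeholders:

    # alert_actions.conf in etc\apps\<your_app>\default
    [hello_world]
    is_custom = 1
    label = Hello World
    description = Runs hello_world.py when the alert fires
    payload_format = json

    # the script itself goes in etc\apps\<your_app>\bin\hello_world.py;
    # Splunk runs it with its bundled Python and passes alert details on stdin

After a restart, "Hello World" should appear in the alert's action list alongside the built-in actions. The commands.conf route, by contrast, defines a custom search command, which is a different mechanism from alert actions.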