All Topics

I want to parse local log files and add the date to the body of the POST request, but I'm not exactly certain what the best date format is to use. Can someone please provide some example options? Thank you, Mark

```
$params = @{
    Uri     = 'https://prd-p.splunkcloud.com:8088/services/collector'
    Method  = 'POST'
    Headers = @{ Authorization = 'Splunk 2caf8cde' }
    Body    = @{
        index      = 'job1'
        sourcetype = '_json'
        event      = @{
            name1  = "value1"
            name2  = "value2"
            array1 = @(
                "value1"
                "value2"
            )
        }
    } | ConvertTo-Json
}
Invoke-RestMethod -SkipCertificateCheck @params
```
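One common option, sketched here under the assumption that your timestamps can be treated as UTC: the HEC collector endpoint accepts a top-level "time" field (a sibling of "event" in the JSON body, not inside it) holding Unix epoch seconds, with fractional seconds as an optional decimal. Converting a parsed log timestamp to that form looks like this in Python terms (the input format string is an assumption about your log files):

```python
from datetime import datetime, timezone

def to_hec_time(log_timestamp, fmt="%Y-%m-%d %H:%M:%S"):
    """Convert a parsed log timestamp to what HEC's "time" field expects:
    Unix epoch seconds, optionally with fractional seconds as a decimal.
    The timestamp is treated as UTC here (an assumption to adjust)."""
    dt = datetime.strptime(log_timestamp, fmt).replace(tzinfo=timezone.utc)
    return f"{dt.timestamp():.3f}"

# The returned string would go into the JSON body alongside "event",
# i.e. Body = @{ time = <epoch>; event = @{ ... } } in the PowerShell above.
print(to_hec_time("2022-09-19 09:20:40"))
```

If you omit "time" entirely, Splunk falls back to extracting a timestamp from the event text or using the time of receipt, so sending epoch explicitly is the most predictable of the options.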
Hello Splunkers, below is the screenshot in which I have created one hidden panel, where level_department is the token I have set. The token $level1$ referred to here brings in the department-level related information; that is all working fine. In the panel below, after $cat$, I tried to refer to $level1_department$ (the token from the panel above), and that is not working. Can someone help me with what the issue is and what I need to correct in order to refer to the token $level1_department$ in the query below?
Hi everyone, I am desperately seeking help with my new query in Splunk. The search result looks like the below:

```
"pluginid","alertRef","alert","name","riskcode","confidence","riskdesc","confidencedesc","desc","instances","count","solution","otherinfo","reference","cweid","wascid","sourceid"
"100001","100001","Unexpected Content-Type was returned","Unexpected Content-Type was returned","1","3","Low (High)","High","<p>A Content-Type of text/html was returned by the server.</p><p>This is not one of the types expected to be returned by an API.</p><p>Raised by the 'Alert on Unexpected Content Types' script</p>","System.Xml.XmlElement","933","","","","-1","-1","20420"
"100000","100000","A Client Error response code was returned by the server","A Client Error response code was returned by the server","0","3","Informational (High)","High","<p>A response code of 401 was returned by the server.</p><p>This may indicate that the application is failing to handle unexpected input correctly.</p><p>Raised by the 'Alert on HTTP Response Code Error' script</p>","System.Xml.XmlElement","2831","","","","388","20","70"
```

My aim is to have a table in Splunk that categorizes each value under a new field. For example:

```
pluginid  alertRef  alert
100001    100001    Unexpected Content-Type was returned
100000    100000    A Client Error response code was returned by the server
```

So my regex should be able to read every new line inside the CSV search result. My current attempt is not really capable of that (it only reads a single line, not multiple lines), as you can see below (I skipped the column names):

```
^"\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+"\s+"(?P<plugin_id>\d+)","(?P<alert_ref>\d+)
```

Please help me get the regex to read all the new lines in my CSV search result.
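As an illustration of the regex side of this (the SPL wiring is a separate question): the key is to anchor at every line start with multiline mode, and to use a character class like `[^"]*` instead of `\w+`, which fails on values containing spaces or punctuation. A Python sketch over a hypothetical two-row sample shaped like the result above:

```python
import re

# Hypothetical two-row sample mirroring the search result above
# (long columns trimmed for brevity).
csv_result = (
    '"100001","100001","Unexpected Content-Type was returned","..."\n'
    '"100000","100000","A Client Error response code was returned by the server","..."'
)

# re.MULTILINE makes ^ anchor at every line start, so each CSV row yields
# its own match; [^"]* tolerates the spaces/punctuation that \w+ rejects.
pattern = re.compile(
    r'^"(?P<plugin_id>\d+)","(?P<alert_ref>\d+)","(?P<alert>[^"]*)"',
    re.MULTILINE,
)

rows = [m.groupdict() for m in pattern.finditer(csv_result)]
for row in rows:
    print(row["plugin_id"], "->", row["alert"])
```

In SPL, the analogous `rex` would combine the inline `(?m)` flag with `max_match=0` so that every line of the event yields a match rather than only the first.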
I have a very noisy app log. I want to use Splunk's indexer filtering to keep only relevant data and index it. Basically, I need to match the string 'Error', and forward only the matched line and the line preceding it for indexing. In other words, I need to do a grep and a grep -B1 for the string Error, and then index only those events using Splunk's indexer filtering. How do I do that?

Example: I have this log data:

```
INFO: Task1
INFO: OK
INFO: Task 2
ERROR: exception xyz
```

Here, I only want to capture and index this:

```
INFO: Task 2
ERROR: exception xyz
```
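One common approach is index-time filtering with a nullQueue route: send everything to the null queue by default, then re-route matching events back to the index queue. A sketch, assuming a sourcetype name of my_app_log (transforms run left to right, so the keep rule must come last):

```
# props.conf
[my_app_log]
TRANSFORMS-filter = setnull, keepError

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keepError]
REGEX = ERROR
DEST_KEY = queue
FORMAT = indexQueue
```

One caveat: these transforms act on whole events, not lines, so keeping the preceding INFO line only works if line breaking merges it into the same event as the ERROR line (e.g. via LINE_BREAKER / SHOULD_LINEMERGE settings); otherwise only the ERROR event itself survives the filter.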
Hi all! I have been absolutely stumped by this and am hoping you can help me out. I am trying to find users that have 2 different, distinct events that happen on the same day. One event can occur at any time of the day, and the second event occurs between 6-8 am. The closest I have gotten is:

```
index=Info source=Trustme (EventCode=X OR EventCode=Y)
| eval hour=tonumber(strftime(_time,"%H"))
| where hour>=8 OR hour<0
| stats values(EventCode) as Event_Codes by User
| search Event_Codes=X Event_Codes=Y
```

This is clipping out users who have Event Y occur outside of that range, which I would like to avoid. Also, I want to cast this over a large period to test and make sure I'm capturing the right people; then I can hopefully set it up as an alert. Any help would be greatly appreciated!
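One way to avoid clipping users: instead of filtering rows up front with `where`, compute per-user, per-day flags so Event Y occurrences outside the window don't remove anything. A sketch, assuming Y is the 6-8 am event (swap the codes if it's X):

```
index=Info source=Trustme (EventCode=X OR EventCode=Y)
| eval day=strftime(_time, "%Y-%m-%d")
| eval hour=tonumber(strftime(_time, "%H"))
| eval sawX=if(EventCode="X", 1, 0)
| eval sawY_window=if(EventCode="Y" AND hour>=6 AND hour<8, 1, 0)
| stats max(sawX) AS hasX, max(sawY_window) AS hasY by User, day
| where hasX=1 AND hasY=1
```

Because the grouping is by User and day, running this over a large test period naturally produces one row per user per qualifying day, which should make it straightforward to verify the right people before converting it into an alert.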
Hi, on a Linux server a UF is configured to monitor a log directory, and it stops sending data to the indexer after about 2 minutes. When I restart the UF from the deployment server, it starts sending data and then stops again. Other inputs, such as scripted inputs, are working fine, and there are no errors or warnings in the _internal index for this host. Do you have any idea about this problem?
Hi all - I am trying to exclude matching results from a lookup and can't get it to work. I've tried multiple searches, tried what I've found in Splunk Answers, and I just can't get this to work. Here's what I have right now:

```
| inputlookup myinputlookup1
| search NOT [ | lookup my_lookup InLookField AS LookField OUTPUT InLookField ]
```

This search runs but produces no results. What am I doing wrong?
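For comparison, the usual exclusion pattern uses `inputlookup` inside the subsearch (the `lookup` command needs incoming events to enrich, so on its own in a subsearch it returns nothing), with a rename so the subsearch emits terms matching the outer field name. A sketch using your field names as written:

```
| inputlookup myinputlookup1
| search NOT [ | inputlookup my_lookup
               | rename InLookField AS LookField
               | fields LookField ]
```

The subsearch expands to `LookField="value1" OR LookField="value2" ...`, and the NOT then drops every row of myinputlookup1 whose LookField appears in my_lookup; whether InLookField is the right column to rename is an assumption to check against your lookup definition.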
Hi, we are using both Splunk Cloud and Splunk Enterprise. We recently came across some differences in search behavior that we originally thought were due to indexed-field issues, but they turned out to be more about a basic difference in how each environment converts a search into lispy (at least that is what we observe). For example, in Splunk Cloud 8.2.2203.4 the following search:

```
index=_internal some_field=some-value
```

results in the following lispy:

```
[ AND index::_internal [ OR some_field::some-value [ AND some value ] ] ]
```

For our Splunk Enterprise 8.2.6, the same search results in the following lispy:

```
[ AND index::_internal some value ]
```

In our case `some_field` is an indexed field added by our HEC requests. This results in very incorrect searches in Enterprise and inefficient searches in Cloud. We do now realize we can directly query for `some_field::some-value`, but we would like to understand this behavior difference and whether it is configurable. Thanks
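If some_field is genuinely an indexed field, one configurable piece worth checking is fields.conf on the search heads: declaring the field as indexed tells the search parser to emit the `some_field::some-value` form in lispy rather than falling back to scanning raw terms. A sketch (whether this fully explains the Cloud vs. Enterprise difference is an assumption to verify against both environments):

```
# fields.conf (on the search heads)
[some_field]
INDEXED = true
```

Without this declaration, Splunk has to assume the field might be search-time extracted, which is consistent with the raw-term lispy you are seeing from Enterprise.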
Hello all, I am relatively new to Splunk and I am trying to search using sets. "Sets" here refers to a group of values that I import into Splunk and then search the logs from a data source for values that match any value in the set, something like a reference set in QRadar. The use case I am trying to implement is an alert for blacklisted applications. I have a .csv file that contains two columns: application name and application category. I want to import this data into Splunk and then use the values in the application-name column to search against the processName field of the logs from the endpoint security solution. How do I achieve this in Splunk? I have read through the documentation for lookups, but I did not understand how it would help me achieve my objective.
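As a sketch of the lookup-driven pattern: once the CSV is uploaded as a lookup file, a subsearch over it can generate the OR-list of values for you, with a rename mapping the lookup column onto the event field. The index, file, and column names below are assumptions based on the description:

```
index=endpoint_security
    [ | inputlookup blacklisted_apps.csv
      | rename application_name AS processName
      | fields processName ]
```

The subsearch expands to `processName="app1" OR processName="app2" ...`, so the outer search returns only events whose processName appears in the blacklist; saved as a scheduled search, this becomes the alert, and the category column can still be pulled in afterwards with `| lookup blacklisted_apps.csv application_name AS processName OUTPUT application_category`.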
Is it possible to extract a field across multiple indexes and multiple sourcetypes?
Hello, can anyone help me out with deployment server and UF architecture? In our environment we are facing issues with the deployment server to UF connection: for some DS-UF pairs the connection keeps getting dropped on its own. What could be the issue? Thank you for your time.
Initially we were using Splunk Enterprise to log our real-time logs, but a few days ago we moved to Splunk Cloud for logging, and we have also migrated all the alerts and dashboards from Splunk Enterprise to Splunk Cloud. Now we are observing a lag in logs sent to Splunk Cloud: logs are delayed by a few minutes on Splunk Cloud compared to Splunk Enterprise. We need to understand the reason; can someone please guide us?
I am looking to monitor permissions for anything that has the "everyone" permission and was created by a normal user, with the following query:

```
| rest /servicesNS/-/search/directory
| search author!=nobody AND author!=admin
| fields title, eai:type, author, disabled, eai:acl.perms.read, eai:acl.perms.write
| rename eai:* AS *, acl.* AS *
| where 'perms.write'="*" OR 'perms.read'="*"
| search NOT type IN (views, commands, macros, *lookup*, *extract*, fvtags, fieldaliases, nav, eventtypes)
```

Which is fine, but what I really want to do is then have it change those "everyone" permissions to our base user role (which is the inheritance point for all other groups, and all users are put in it). So I want to make it a self-remediating alert. The thing is, I can't find anything on changing permissions with the Splunk query language, and since this is a Cloud instance I don't have access to the CLI.
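Splunk's REST API can change permissions without the CLI: POSTing form fields to a knowledge object's `/acl` endpoint (the object's own path plus `/acl`) rewrites its sharing, owner, and read/write role lists. A Python sketch that only builds the request, since the remediation would run as an external script or custom alert action; the stack host, entity path, and role names are placeholders, and the actual send would go through something like `requests.post(url, data=payload, headers={"Authorization": "Bearer <token>"})`:

```python
def build_acl_update(entity_path, owner, read_roles, write_roles,
                     host="https://example.splunkcloud.com:8089"):
    """Build the URL and form payload for Splunk's REST ACL endpoint.

    POSTing these form fields to <entity_path>/acl replaces the object's
    permissions. Host, entity path, and role names are placeholders."""
    payload = {
        "sharing": "app",                    # keep the object app-shared
        "owner": owner,
        "perms.read": ",".join(read_roles),  # replaces any "*" (everyone)
        "perms.write": ",".join(write_roles),
    }
    return host + entity_path + "/acl", payload

url, payload = build_acl_update(
    "/servicesNS/nobody/search/saved/searches/MySearch",
    owner="nobody",
    read_roles=["base_user_role"],
    write_roles=["admin"],
)
print(url)
print(payload["perms.read"])
```

The `| rest` search you already have conveniently returns each object's endpoint path, so the alert's results can feed entity paths straight into a loop over this builder.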
In classic dashboards I could schedule a PDF of a dashboard to be sent to me daily, but I do not see that option in Dashboard Studio. Where is that option? Rich
I am trying to use Splunk to review Windows logs that were exported from machines that are not on a network. I have copied the .evtx files to my Splunk machine. Fresh install of Splunk 9.0.1. Below is the process I used to try to get the events indexed. I think this is the same process I have used in the past, but for some reason no events are indexed.

1. Settings --> New Index
2. Enter name for index
3. Save
4. Settings --> Data Inputs
5. Files and Directories
6. New Local File and Directory
7. Input path of top-level folder containing logs
8. Select Continuously Monitor
9. Next
10. Source Type: Automatic
11. App Context: Search and Reporting
12. Select Constant Value
13. Host field name: *
14. Select index created in steps 1-3
15. Review
16. Start Searching

The search results in 0 events listed. I delete everything from the search box except index="nameofindex" and still there are no events listed.
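One thing that commonly bites here: .evtx files are binary, so a plain monitor input with automatic sourcetyping indexes nothing. On a Windows Splunk instance, assigning the built-in preprocess-winevt sourcetype tells Splunk to decode exported .evt/.evtx files before indexing. A sketch of the inputs.conf equivalent of the steps above (the path and index name are placeholders, and this decoding only works when Splunk itself runs on Windows):

```
# inputs.conf (Windows-only: this sourcetype triggers evtx decoding)
[monitor://C:\exported_logs]
index = offline_winlogs
sourcetype = preprocess-winevt
disabled = 0
```

The same sourcetype can be set in the UI at step 10 instead of "Automatic", which is usually the smaller change to an existing input.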
Hi, we would like to monitor authentication attempts on our SMTP server (Exchange 2016), but I could not find a way to do so. We already have IT Essentials with the Exchange Content Pack, but SMTP does not seem to be covered. Then I took a look into the "Splunk Add-on for Microsoft Exchange", which contains TA-Exchange-HubTransport; it monitors the following path, but I checked those logs on my server and they did not contain any authentication events:

```
[installation drive]:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking
```

Has anyone managed to find those events and ingest them into Splunk somehow? Our final goal is to find public IPs trying to brute-force our SMTP server. Thanks a lot.
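Authentication attempts don't land in message tracking; they show up in the SMTP protocol (receive) logs, and only once protocol logging is enabled on the relevant receive connectors in Exchange. A sketch of a monitor stanza for the default Exchange 2016 front-end path (the path, index, and sourcetype name are assumptions to verify against your server and the add-on's conventions):

```
# inputs.conf on the Exchange server's UF
[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\FrontEnd\ProtocolLog\SmtpReceive]
index = exchange
sourcetype = MSExchange:2013:SmtpReceive
disabled = 0
```

Once these are indexed, failed AUTH attempts appear as protocol-log lines with the client IP, which is what you would aggregate to spot brute-force sources.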
Greetings all, regarding server classes and clients: if I remove conditions from the includelist of a server class, will that also automatically remove the clients matching those conditions from the server class, or will I still need to manually remove those clients? Thanks!
Hello, I have installed the Eventgen app from Splunkbase on our heavy forwarder. The details are:

SA-Eventgen version: 7.2.1
Splunk version: 8.1.1

After installation, I am seeing the following error messages related to Eventgen:

```
ERROR ExecProcessor - message from "/cs/splunk/forwarder/bin/python3.7 /cs/splunk/forwarder/etc/apps/SA-Eventgen/bin/eventgen.py" SyntaxError: Missing parentheses in call to 'print'. Did you mean print('\n\nCaught kill, exiting...')?
ERROR ExecProcessor - message from "/cs/splunk/forwarder/bin/python3.7 /cs/splunk/forwarder/etc/apps/SA-Eventgen/bin/eventgen.py"
ERROR ExecProcessor - message from "/cs/splunk/forwarder/bin/python3.7 /cs/splunk/forwarder/etc/apps/SA-Eventgen/bin/eventgen.py" print '\n\nCaught kill, exiting...'
ERROR ExecProcessor - message from "/cs/splunk/forwarder/bin/python3.7 /cs/splunk/forwarder/etc/apps/SA-Eventgen/bin/eventgen.py" File "/cs/splunk/forwarder/etc/apps/SA-Eventgen/bin/eventgen.py", line 113
```

Has anyone faced a similar issue with SA-Eventgen? It looks like the Python code in Eventgen is not compatible with the Python version: the `print` statement without parentheses is Python 2 syntax, and the script is being run with Python 3.7. Any idea how to troubleshoot or resolve these issues?
The app is on Splunkbase, published by splunklabs/splunkworks: Splunk Add-on for Microsoft Office 365 Reporting Web Service | Splunkbase. The app was recently updated to use advanced auth, but it requires the GLOBAL READER role, which is really a big deal for us; it also doesn't seem appropriate for the purpose we are using the app for (just reading web mails). Is there any way we could make it work with less privilege? This is the error I receive on the HF where I have installed the app:

```
09-19-2022 09:20:40.397 -0500 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py" 403 Client Error: for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2022-09-14T09:20:39.496008Z'%20and%20EndDate%20eq%20datetime'2022-09-14T10:20:39.496008Z
```

It would be really appreciated if someone could help me out with this.
I am trying to import pandas in a Splunk app that I am developing. The app also uses fpdf to create a PDF and insert the images that are produced using pandas. Two of the three solutions that I tried produce an error as soon as I put 'import pandas'.

Solution 1: importing the pandas module from a copy of the module located in the app's bin directory. The error log says:

```
File "../splunk/etc/apps/<Appname>/bin/pandas/__init__.py", line 37, in <module>
    f"C extension: {module} not built. If you want to import "
ImportError: C extension: No module named 'pandas._libs.tslibs.conversion' not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace --force' to build the C extensions first.
```

Solution 2: just importing the library from Splunk's Python (simply by putting 'import pandas' at the top of the code). The error log says:

```
Traceback (most recent call last):
  File "/splunk/etc/apps/<Appname>/bin/program.py", line 22, in <module>
    import pandas
  File "/splunk/lib/python3.7/site-packages/pandas/__init__.py", line 22, in <module>
    from pandas.compat import (
  File "/splunk/lib/python3.7/site-packages/pandas/compat/__init__.py", line 14, in <module>
    from pandas._typing import F
  File "/splunk/lib/python3.7/site-packages/pandas/_typing.py", line 12, in <module>
    from mmap import mmap
ModuleNotFoundError: No module named 'mmap'
```

For the third and final attempt, I tried a solution that I found on Splunk's forum: using the app "Python for Scientific Computing". The steps I applied were:

1. Copying exec_anaconda.py to my app's bin.
2. Putting the following code at the very top of my script:

```
#!/usr/bin/python
import exec_anaconda
exec_anaconda.exec_anaconda()

# Put the rest of your imports below, e.g.:
import numpy as np
import pandas
```

With this solution I no longer get an error on 'import pandas', but I get errors from most of the other modules that I am working with (fpdf, pip and so on). Does anybody know how to import pandas without getting these kinds of errors? Thank you very much.