All Topics


Hello, can anyone help me out with deployment server and UF architecture? In our environment we are facing issues with the deployment server to UF connection: for some DS-UF pairs the connection keeps getting disconnected on its own. What could be the issue? Thank you for your time.
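A possible starting point for troubleshooting, offered as a rough sketch: the deployment client logs its phone-home attempts to _internal, so a search along these lines (the host name is a placeholder, and the component names are the ones commonly seen for deployment-client traffic) can show whether the UF is failing to reach the DS or the connection is being closed:

index=_internal host=<uf_hostname> (component=DC* OR component=HttpPubSubConnection) (log_level=WARN OR log_level=ERROR)
| stats count latest(_raw) as latest_message by component, log_level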
Initially we were using Splunk Enterprise to index our real-time logs, but a few days ago we moved to Splunk Cloud and also migrated all the alerts and dashboards from Splunk Enterprise to Splunk Cloud. Now we are observing a lag in the logs sent to Splunk Cloud: logs are delayed by a few minutes in Splunk Cloud compared to Splunk Enterprise. We need to understand the reason; can someone please guide us?
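One way to quantify where the delay sits, as a minimal sketch (the index name is a placeholder), is to compare event time with index time per sourcetype:

index=<your_index> earliest=-1h
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) max(lag_seconds) perc95(lag_seconds) by sourcetype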
I am looking to monitor permissions for anything that has the Everyone permission and was created by a normal user, with the following query:

| rest /servicesNS/-/search/directory
| search author!=nobody AND author!=admin
| fields title, eai:type, author, disabled, eai:acl.perms.read, eai:acl.perms.write
| rename eai:* AS *, acl.* AS *
| where 'perms.write'="*" OR 'perms.read'="*"
| search NOT type IN (views, commands, macros, *lookup*, *extract*, fvtags, fieldaliases, nav, eventtypes)

That is fine, but what I really want to do is then have it change those Everyone permissions to our base role (which is the inheritance point for all other groups and which all users are put in), so that it becomes a self-remediating alert. The thing is, I can't find anything on changing permissions with the Splunk query language, and since this is a cloud instance I don't have access to the CLI.
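For reference, a heavily hedged sketch of the REST route: knowledge-object permissions can be changed by POSTing to the object's acl endpoint, which would have to be driven from an external script or automation rather than from SPL or the CLI, and whether the management port is reachable on a given Splunk Cloud stack is a separate question. The stack name, app, object path, credentials, and role below are all placeholders; the app and object path would come from the eai:acl fields returned by the search above.

curl -k -u <admin_user>:<password> \
     "https://<your-stack>.splunkcloud.com:8089/servicesNS/nobody/<app>/saved/searches/<object_name>/acl" \
     -d sharing=app \
     -d owner=nobody \
     -d "perms.read=<base_role>" \
     -d "perms.write=<base_role>"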
In classic dashboards I could schedule a PDF of a dashboard to be sent to me daily, but I do not see that option in Dashboard Studio. Where is that option?   Rich
I am trying to use Splunk to review Windows logs that were exported from machines that are not on a network. I have copied the .evtx files to my Splunk machine (a fresh install of Splunk 9.0.1). Below is the process I used to try to get the events indexed. I think this is the same process I have used in the past, but for some reason no events are indexed.
1. Settings --> New Index
2. Enter a name for the index
3. Save
4. Settings --> Data Inputs
5. Files and Directories
6. New Local File and Directory
7. Input path of the top-level folder containing the logs
8. Select Continuously Monitor
9. Next
10. Source Type: Automatic
11. App Context: Search and Reporting
12. Select Constant Value
13. Host field name: *
14. Select the index created in steps 1-3
15. Review
16. Start Searching
The search returns 0 events. I delete everything from the search box except index="nameofindex" and still no events are listed.
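For comparison, this is roughly the inputs.conf stanza the UI steps above would produce, shown as a sketch with a placeholder path and the index name from the steps. Note that .evtx is a binary format, so whether a plain file monitor with an automatic sourcetype can actually read these files is a separate thing to verify.

[monitor://C:\exported_logs]
disabled = false
index = nameofindex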
Hi, we would like to monitor authentication attempts on our SMTP server (Exchange 2016), but I could not find a way to do so. We already have IT Essentials with the Exchange Content Pack, but SMTP does not seem to be covered. Then I took a look at the "Splunk Add-on for Microsoft Exchange", which contains TA-Exchange-HubTransport; it monitors the following path, but I checked those logs on my server and they did not contain any authentication events:

[installation drive]:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking

Has anyone managed to find those events and ingest them into Splunk somehow? Our final goal is to find public IPs trying to brute-force our SMTP server. Thanks a lot.
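One avenue to check, offered as an assumption rather than something the add-on covers out of the box: Exchange's SMTP protocol logs (rather than message tracking) record AUTH activity when protocol logging is enabled on the receive connectors. A monitor stanza along these lines could then ingest them; the path is the usual default location and the index/sourcetype are placeholders to verify against your server and add-on.

[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\FrontEnd\ProtocolLog\SmtpReceive]
disabled = false
index = <your_exchange_index>
sourcetype = <your_smtp_protocol_sourcetype>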
Greetings all, regarding server classes and clients: if I remove conditions from the includelist of a server class, will that also automatically remove the clients matching those conditions from the server class, or will I still need to manually remove those clients? Thanks!
Hello, I have installed the Eventgen app from Splunkbase on our heavy forwarder. The details are:

SA-Eventgen version: 7.2.1
Splunk version: 8.1.1

After installation I am seeing the following error messages related to Eventgen:

ERROR ExecProcessor - message from "/cs/splunk/forwarder/bin/python3.7 /cs/splunk/forwarder/etc/apps/SA-Eventgen/bin/eventgen.py" SyntaxError: Missing parentheses in call to 'print'. Did you mean print('\n\nCaught kill, exiting...')?
ERROR ExecProcessor - message from "/cs/splunk/forwarder/bin/python3.7 /cs/splunk/forwarder/etc/apps/SA-Eventgen/bin/eventgen.py"
ERROR ExecProcessor - message from "/cs/splunk/forwarder/bin/python3.7 /cs/splunk/forwarder/etc/apps/SA-Eventgen/bin/eventgen.py" print '\n\nCaught kill, exiting...'
ERROR ExecProcessor - message from "/cs/splunk/forwarder/bin/python3.7 /cs/splunk/forwarder/etc/apps/SA-Eventgen/bin/eventgen.py" File "/cs/splunk/forwarder/etc/apps/SA-Eventgen/bin/eventgen.py", line 113

Has anyone faced a similar issue with SA-Eventgen? It looks like the Eventgen code is not compatible with this Python version or has some other problem. Any idea how to troubleshoot or resolve these errors?
The app is on Splunkbase, published by splunklabs/splunkworks: Splunk Add-on for Microsoft Office 365 Reporting Web Service | Splunkbase. The app was recently updated to use Advanced auth, but it requires the GLOBAL READER role, which is really a big deal for us; it also doesn't seem appropriate for the purpose we are using the app for (just reading web mails). Is there any way we could make it work with less privilege? This is the error I receive on the HF where I have installed the app:

09-19-2022 09:20:40.397 -0500 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py" 403 Client Error: for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2022-09-14T09:20:39.496008Z'%20and%20EndDate%20eq%20datetime'2022-09-14T10:20:39.496008Z

It would be really appreciated if someone could help me out with this.
I am trying to import pandas in a Splunk app that I am developing. The app also uses fpdf to create a PDF and insert the images that are produced using pandas. Two of the three solutions that I tried produce an error as soon as I put 'import pandas'.

1. Solution 1: importing the pandas module from a copy of the module located in the app's bin directory. The error log says:

File "../splunk/etc/apps/<Appname>/bin/pandas/__init__.py", line 37, in <module> f"C extension: {module} not built. If you want to import " ImportError: C extension: No module named 'pandas._libs.tslibs.conversion' not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace --force' to build the C extensions first.

2. Solution 2: importing pandas from Splunk's own Python (simply by putting 'import pandas' at the top of the code). The error log says:

Traceback (most recent call last): File "/splunk/etc/apps/<Appname>/bin/program.py", line 22, in <module> import pandas File "/splunk/lib/python3.7/site-packages/pandas/__init__.py", line 22, in <module> from pandas.compat import ( File "/splunk/lib/python3.7/site-packages/pandas/compat/__init__.py", line 14, in <module> from pandas._typing import F File "/splunk/lib/python3.7/site-packages/pandas/_typing.py", line 12, in <module> from mmap import mmap ModuleNotFoundError: No module named 'mmap'

3. For the third and final attempt, I tried a solution I found on Splunk's forum that uses the app "Python for Scientific Computing". The steps I applied were: copying exec_anaconda.py to my app's bin directory, then putting the following code at the very top of my script:

#!/usr/bin/python
import exec_anaconda
exec_anaconda.exec_anaconda()
# Put the rest of your imports below, e.g.:
import numpy as np
import pandas

With this solution I no longer get an error on 'import pandas', but I get errors from most of the other modules that I am working with (fpdf, pip and so on).

Does anybody know how to import pandas without getting these kinds of errors? Thank you very much.
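For what it's worth, a common pattern (a sketch under the assumption that pandas and its compiled dependencies were installed for the same OS, CPU architecture, and Python version as Splunk's bundled interpreter, for example with pip install --target into the app) is to ship the packages in a lib folder inside the app and put that folder on sys.path before importing:

import os
import sys

# assumption: packages were installed into $SPLUNK_HOME/etc/apps/<Appname>/lib
# with a pip that matches Splunk's bundled Python (3.7 here) and platform
APP_LIB = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "lib")
sys.path.insert(0, APP_LIB)

import pandas as pd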
Hello, I have a query that runs a rest command; one of the fields is "action.email.to". I also have a lookup table with a list of action.email.to values and a team name for each email in the list. I want to compare the action.email.to from the query with the one from the lookup and add another column with the team name. I tried with append, but the team name column is empty. This is my query:

| rest /servicesNS/admin/search/alerts/fired_alerts/-
| fields eai:acl.owner savedsearch_name triggered_alert_count
| join savedsearch_name
    [| rest splunk_server=local count=0 /services/saved/searches
     | rename title as savedsearch_name
     | append [inputlookup mailingList.csv ]
     | table action.email.to savedsearch_name teamName]
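One way to restructure this, as a sketch assuming mailingList.csv has the columns action.email.to and teamName: pull action.email.to out of the saved-search REST call, then use the lookup command instead of append, so each row is matched on the email address rather than appended as unrelated rows.

| rest /servicesNS/admin/search/alerts/fired_alerts/-
| fields eai:acl.owner savedsearch_name triggered_alert_count
| join savedsearch_name
    [| rest splunk_server=local count=0 /services/saved/searches
     | rename title as savedsearch_name
     | fields savedsearch_name action.email.to]
| lookup mailingList.csv action.email.to OUTPUT teamName
| table eai:acl.owner savedsearch_name triggered_alert_count action.email.to teamName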
Hello, is there a way to get older forwarder versions than what is available here: Splunk Universal Forwarder Previous Releases | Splunk? I need a forwarder that works with Windows XP, but I can't find an installer for a version that old. Also, before anyone asks: yes, I know I shouldn't have Windows XP, but it is a disconnected environment and it is a requirement of the architecture. I can't change it or remove it, so please don't bother suggesting that. I need a forwarder that can work with XP (and also 2000 and 98 SE if possible, but I doubt those ever existed).
Hello all, I have a requirement to onboard logs from 5 hosts into Splunk: 3 are Linux and the other 2 are Windows. The Linux logs are being picked up by Splunk, but for Windows we cannot see logs from the 1st to the 9th of every month. The timestamps on the Windows servers are 1/09/2022, 2/09/2022 and so on up to 9/09/2022. %d in props needs the day to be two digits, but here it is only one digit, hence those events are not being timestamped correctly. So I tried the props below for Windows with %e: TIME_FORMAT=%e/%m/%Y %H:%M:%S. We normally apply props by sourcetype; since that was not working I also tried props keyed by source, but the issue is still the same. By the way, in our infrastructure the props are kept on the heavy forwarder and are applied at index time. Can anyone help with this, please?
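For reference, a sketch of the kind of stanza involved, assuming the Windows data is parsed on the heavy forwarder (the sourcetype name, prefix, and lookahead are placeholders to adapt); note that timestamp settings only affect data indexed after the change, not events already on disk.

# props.conf on the heavy forwarder (the first parsing tier for UF-forwarded data)
[your:windows:sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %e/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20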
Hello, I currently have a field that contains a long string across 100+ events, and within that field there are varying file sizes (609.9 KB, 1 GB, 300 B, etc.). What I would like to do is come up with an eval or rex command that pulls out only the file sizes and places them into a new field called File_Size. I tried using the field extractor via the GUI, but it would only pick up a couple of the file sizes, and even then they still wouldn't show up in the available fields of my search. I've also tried creating different queries, but can't seem to get the capture groups set correctly. Can someone please advise on the best approach?
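As a starting point, a rex along these lines might work; your_field is a placeholder for the source field, max_match=0 returns every size in the string as a multivalue result, and the unit list may need extending for your data.

| rex field=your_field max_match=0 "(?<File_Size>\d+(?:\.\d+)?\s?(?:B|KB|MB|GB|TB))"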
Hi all, I've just configured a receiver for Apache ActiveMQ, but noticed in the journal logs that it takes more than 10s to retrieve logs from the ActiveMQ endpoint. There is no collection interval parameter stated in the documentation page for ActiveMQ (https://github.com/signalfx/signalfx-agent/blob/main/docs/monitors/collectd-activemq.md), nor in the genericjmx documentation (https://docs.splunk.com/Observability/gdi/genericjmx/genericjmx.html). May I know if there is any way to set a custom collection interval for the collectd receiver, or for any other receiver, for that matter?
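If this is the Smart Agent, monitors generally accept a per-monitor intervalSeconds setting in agent.yaml; whether the collectd/activemq monitor honours it for this endpoint is something to verify, so treat the snippet below (host, port, and interval values are placeholders) as a sketch rather than a confirmed fix.

monitors:
  - type: collectd/activemq
    host: localhost
    port: 1099
    intervalSeconds: 30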
Hello, I found a way to change the line brush to draw dotted lines in line charts in XML dashboards, but not in Dashboard Studio. Is it possible? If yes, is it possible to have multiple types of lines in one chart (some lines solid, others dotted)? Thanks in advance.
We have a sample local ".txt" file, stored in the /tmp/ folder of the heavy forwarder, that we use to analyse some logs. For this purpose, a sourcetype has been configured on the heavy forwarder to parse the log as we wish. All of this was set up from the web interface. Initially we made the mistake of creating the index on the heavy forwarder so we could assign it from the "Input Settings" step of the "Add Data" menu, but then we discovered that it should not be done this way. So we created an index named "test" on our cluster master and it replicated correctly to the two peer indexers. The index now exists but contains no data, and it does not appear on the search head. Unfortunately, when assigning the destination index from the Add Data menu of the heavy forwarder, the "test" index that was created on the indexers does not appear. In addition, back when the index existed on both the indexers and the heavy forwarder, the events still did not reach the indexers after selecting the "test" index; in that case it did appear in the HF menu, I guess because it was created locally there.
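One detail that may help, offered as an assumption to verify: the index dropdown in the HF's Add Data menu only lists indexes defined locally on that HF, but an input can still send to any index that exists on the indexers if the index name is written directly into inputs.conf, along these lines (the file path and sourcetype shown are placeholders for the ones already configured):

# inputs.conf on the heavy forwarder
[monitor:///tmp/sample.txt]
index = test
sourcetype = <your_custom_sourcetype>
disabled = false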
Hi, I integrated Trend Micro DDI with Splunk using the app. But in DDI there is a space in the signature name, so when Splunk parses the signature name it only shows the first word and not the rest. For example, if the signature name is "possible scanning activity", in Splunk I can only see that the signature name is "Possible"; the rest does not come through. Can someone please help with this? It is very urgent.
Hello, I have a search that outputs table data that looks like this:

hst     code  type
hosta   01    master
hosta   02    master
hostb   01    host
hostb   03    host
hostc   02    host
hostd   04    host
hoste   05    master
hoste   06    master
hostf   06    host
hostg   08    host

etc. etc... I am trying to filter the events but am unable to do so. My goal is to filter based on this condition: if a code that appears on a master also exists on a host, then the host rows with that code should be removed. So my desired output should look like this:

hst     code  type
hosta   01    master
hosta   02    master
hostb   03    host
hostd   04    host
hoste   05    master
hoste   06    master
hostg   08    host

I hope someone can help me. Thanks in advance. Regards, Harry
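One join-free way to express this, as a sketch to append after the existing search (field names taken from the table above): count how many master rows each code has, then keep master rows and only those host rows whose code has no master.

| eventstats sum(eval(if(type="master",1,0))) as master_count by code
| where type="master" OR master_count=0
| fields - master_count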
Hi, I am trying to mask data at index time; can you please help? The first line is the current value and the second is what I would like it to become. Thanks.

"authenticationValue":"AAcBBGJxFAAAAZZANIJZdQAAAAA="
Result: "authenticationValue":"****************************"
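A common way to do this at index time is a SEDCMD in props.conf on the first full parsing instance (heavy forwarder or indexer); the stanza name below is a placeholder for your sourcetype, and the change only affects data indexed after a restart.

# props.conf
[your:sourcetype]
SEDCMD-mask_authvalue = s/("authenticationValue":")[^"]+(")/\1****************************\2/g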