All Posts

../python3.9/site-packages/urllib3/connectionpool.py:1099: InsecureRequestWarning: Unverified HTTPS request is being made to host ''. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#tls-warnings
  warnings.warn(

Response Code: 401
Response Text:
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="WARN">call not properly authenticated</msg>
  </messages>
</response>

This is the error from Splunk that I got using the token that I made. It's definitely the correct token from Splunk. Is this because of SSL?
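For what it's worth, a 401 is an authentication failure, not a TLS failure; the InsecureRequestWarning only means certificate verification is switched off. A minimal sketch of how a Splunk authentication token is normally sent (host, token, and CA-bundle path below are placeholders):

```python
# Sketch of calling the Splunk REST API with an authentication token.
# Host, port, token, and CA-bundle path are hypothetical placeholders.

def build_headers(token: str) -> dict:
    # Splunk authentication tokens (JWTs) go in the Authorization header
    # with the "Bearer" scheme; session keys returned by
    # /services/auth/login use "Splunk <sessionKey>" instead.
    return {"Authorization": f"Bearer {token}"}

# With the `requests` library, the call would look roughly like:
#
#   import requests
#   resp = requests.get(
#       "https://splunk.example.com:8089/services/server/info?output_mode=json",
#       headers=build_headers("<your-token>"),
#       verify="/path/to/ca-bundle.pem",  # or True with a trusted cert
#   )
#
# Note: verify=False only silences the warning; it has no effect on the
# 401, which means the token was not accepted as sent.
```

If the header is already Bearer-style and the token is valid, the next thing to check is whether token authentication is enabled on the server at all.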
Hi, I support two customers who are both running Splunk on Windows, and after upgrades this year both are experiencing very similar problems. I use this to monitor swap memory usage:

index=_introspection swap component=Hostwide | timechart avg(data.swap_used) span=1h

and as it increases we then start seeing dumps, which I can also graph with this:

index=_internal sourcetype=splunkd_crash_log "Crash dump written to:" | timechart count

Like you, this has been logged with Splunk for some time but no fix yet - though they did just say there is an internal case looking into it. For my customers the problem builds up more slowly, so as long as they restart Splunk twice a week they have no problems. Sounds like that won't help you. It's nice to know there are others with the same issue. Thanks for all your detail, especially re rammap.exe. Good luck!
I want to block the audit.log file from a particular instance sending logs to Splunk. Is the stanza below sufficient to accomplish that?

Per the docs, for matching a file:

blacklist = <regular expression>
* If set, files from this input are NOT monitored if their path matches the specified regex.
* Takes precedence over the deprecated '_blacklist' setting, which functions the same way.
* If a file matches the regexes in both the deny list and allow list settings, the file is NOT monitored. Deny lists take precedence over allow lists.
* No default.

[monitor:///logs/incoming/file.com/all-messages.log]
sourcetype = something
index = something_platform
disabled = 0
blacklist = audit.log
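As a point of comparison: blacklist filters files by path within a monitored input, so it is usually paired with a directory monitor rather than a single-file monitor, and the value is a regex, so the dot is worth escaping. A sketch under those assumptions (paths here are illustrative):

```
[monitor:///logs/incoming/file.com]
sourcetype = something
index = something_platform
disabled = 0
blacklist = audit\.log$
```

A monitor stanza that points at exactly one file (all-messages.log) never picks up audit.log in the first place, so the blacklist there would have nothing to match.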
You are correct, @gcusello , I wouldn't normally. The only reason I was listing it was because of the error I was getting at startup about it being incorrect, so I was trying to override it. As it turns out, when I fixed it in the distributed indexes.conf file, it didn't actually SAVE the correction, so that peer-app kept overriding everything else I did with the wrong configuration. It took me several hours of staring at the error before I actually saw it. So it was a carbon-based error.
I have a Splunk 9.1.2 server running RHEL 8 with about 50 clients. This is an air-gapped environment. I have a bunch of Linux (RHEL and Ubuntu) UFs and have configured inputs.conf to ingest files like /var/log/messages, /var/log/secure, /var/log/audit/audit.log, /var/log/cron, etc. Recently I noticed that only logs from /var/log/messages and /var/log/cron are being ingested; specifically, I don't see /var/log/secure and /var/log/audit/audit.log. I tried restarting the splunk process on one of the UFs and checked splunkd.log, and I don't see any errors. Here is what I see for /var/log/secure in splunkd.log (looks normal; I have typed it out, as I can't copy/paste from the air-gapped machine):

TailingProcessor [xxxxxx MainTailingThread] passing configuration stanza: monitor:///var/log/secure
TailingProcessor [xxxxxx MainTailingThread] Adding watch on path:///var/log/secure
WatchedFile [xxxxxx tailreader 0] - Will begin reading at offset=xxxx for file=`/var/log/secure`

Here is my inputs.conf:

[default]
host = <indexer>
index = linux

[monitor:///var/log/secure]
disabled = false

[monitor:///var/log/messages]
disabled = false

[monitor:///var/log/audit/audit.log]
disabled = false

[monitor:///var/log/syslog]
disabled = false

File permissions seem to be fine for all of those files. Please note, SELinux is enabled, but permissions still seem fine. Initially I did have to run "setfacl -R -m u:splunkfwd:rX /var/log" for Splunk to get access to send logs to the indexer. btool also shows that I am using the correct inputs.conf. Any idea what's misconfigured?
Hello! Checking in August 22, 2024 -- still not able to edit permissions on multiple objects at once. 
Hey, thanks for taking the time to reply, bwheel, but I think you might have misread my post. I stated that I was clicking the "Rebuild forwarder Assets..." button. I'm not sure what you're referring to with the "regular 'update'" you mention, and I couldn't find any mention of an "update" option in the document you linked. Maybe I'm misunderstanding what you're saying, but either way, please don't spend any further time on it. I opened a support case about the fact that it didn't work, and they said it was a bug and provided me with a search to update the lookup table manually. I think they might have fixed it at this point; I seem to recall using it not too long ago.
I'm using the punchcard in Dashboard Studio, and the values on the left are getting truncated with ellipses. Is there a way to display the full value or edit the truncation style?
Hi @kareem , could you better describe your issue? and share some sample of your data? Ciao. Giuseppe
Hi, I have a log that tracks user changes to a specific field in a form. The process is as follows:

1. The user accesses the form, which generates a log event with the "get" eventtype along with the current value of field1. This can occur several times as the user refreshes the page, or through code behind the scenes that generates an event based on how long the user stays on the page.
2. The user fills in the form and hits submit, which logs an event with the "update" eventtype.

Here's a simplified list of events:

_time   eventtype   sessionid   field1
10:06   update      session2    newvalue3
10:05   get         session2    newvalue2
09:15   update      session1    newvalue2
09:12   get         session1    newvalue1
09:10   get         session1    newvalue1
09:09   update      session1    newvalue1
09:02   get         session1    oldvalue1
09:01   get         session1    oldvalue1
08:59   get         session1    oldvalue1

I'm looking to get the last value of field1 before each "update" event. Basically, I'd like to track what the value was before and what it was changed to, something like:

_time   Before      After
10:06   newvalue2   newvalue3
09:15   newvalue1   newvalue2
09:09   oldvalue1   newvalue1

I've tried this with a transaction command on the session, but I run into issues with the multiple "get" events in the same session, which makes it a little convoluted to extract the running values of field1. I also tried a combination of latest(field1) and earliest(field1), but that misses any updates that take place within a session - we sometimes have users who change the value and then change it back, and I'd like to capture those events as well.

Does anyone have any tips on how to get this accomplished? Thanks!
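One approach worth trying is streamstats, which carries the previous event's field1 forward within each session. A sketch, assuming the index/sourcetype are placeholders and that each update is preceded by at least one event holding the prior value:

```
index=your_index sourcetype=your_sourcetype
| sort 0 sessionid _time
| streamstats current=f last(field1) as Before by sessionid
| where eventtype="update"
| rename field1 as After
| table _time Before After
```

With current=f, streamstats evaluates last(field1) over only the events before the current one, so each "update" row picks up the value from the immediately preceding event in its session, even when the user changes the value back and forth.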
Splunk does work with ELB. Check out https://community.splunk.com/t5/Getting-Data-In/Does-external-load-balancer-works-with-Universal-Heavy-forwarder/m-p/532727
Hello, I have a query:

searchquery_oneshot = "search (index=__* ... events{}.name=ResourceCreated) | dedup \"events{}.tags.A\" | spath \"events{}.tags.A\" || lookup Map.csv \"B\" OUTPUT \"D\" | table ... | collect ...

I ran this using the Python SDK in VSCode as:

oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
conn.cursor().execute(sql, val)

I ran the above using psycopg2 and got this error:

FATAL: Error in 'lookup' command: Could not construct lookup 'Map.csv, B, OUTPUT, D'. See search.log for more details.

The query works when run inside Splunk Enterprise, i.e. Map.csv is looked up and the result is fetched correctly. How do I locate my search.log? It is splunkhome/var/lib/dispatch/run, I assume. What is the error above? Thanks
Hi guys, when I extract a selected event, it doesn't show all the data in the event that I need to extract.
Experiencing the exact same issue with a similar configuration: Infoblox Data Connector -> HF -> Splunk Cloud (IN). Before I go banging my head against conf files, did you ever find a solution? Like you mentioned, the data seems pre-cooked by the Infoblox "thing".
Issue got resolved. Thanks for the solution!
There are two options. One is the regular 'update', which you have selected, and the other is an additional 'rebuild forwarder assets' button that does a complete rebuild. They both use saved searches: the update is activated when you select 'Enable', and the rebuild you can set up to run regularly to clear out older forwarders if you wish. This is especially useful when you have AWS or similar environments that are regularly redeployed. Some more detail is available at: Use the Forwarder dashboards - Splunk Documentation
This can be caused by syslog not supporting newlines (\n). The following settings on the HF will improve this.

props.conf

[your-sourcetype]
TRANSFORMS-◯◯ = transname

transforms.conf

[transname]
INGEST_EVAL = _raw=replace(_raw, "\n", " ")
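The replace() in that INGEST_EVAL is regex-based substitution. A quick Python sketch of the same transformation, using a made-up multiline event:

```python
import re

# A made-up multiline syslog payload standing in for _raw.
raw = "line one\nline two\nline three"

# Equivalent of INGEST_EVAL _raw=replace(_raw, "\n", " "):
# every newline in the event is replaced with a single space
# before the event is indexed.
flattened = re.sub(r"\n", " ", raw)
print(flattened)  # line one line two line three
```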
Does the POST endpoint work for Splunk Cloud? The documentation linked is for Enterprise, while the original question, and mine, is regarding the Splunk Cloud platform.

https://${STACK_NAME}.splunkcloud.com:8089/servicesNS/nobody/${APP_NAME}/properties/${CONF_NAME}/${STANZA}

Splunk Cloud Platform has limitations on what admins can do via the API to modify app configurations. https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/RESTTUT/RESTandCloud
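For concreteness, this is roughly how that endpoint would be exercised with curl; the stack, app, conf, and stanza names below are made-up placeholders, and on Splunk Cloud the management port may not be reachable at all without an allow-list entry (or ACS, depending on the experience):

```shell
# Hypothetical values -- substitute your own stack, app, and stanza.
STACK_NAME="example-stack"
APP_NAME="my_app"
CONF_NAME="props"
STANZA="my_sourcetype"

URL="https://${STACK_NAME}.splunkcloud.com:8089/servicesNS/nobody/${APP_NAME}/properties/${CONF_NAME}/${STANZA}"
echo "$URL"

# The POST itself (commented out; requires a token with sufficient
# capabilities, which Splunk Cloud may not grant for this endpoint):
# curl -X POST -H "Authorization: Bearer $TOKEN" \
#      -d "SHOULD_LINEMERGE=false" "$URL"
```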
Have a nice day, everyone! I came across some unexpected behavior while trying to move some unwanted events to the nullQueue.

I have a sourcetype named 'exch_file_trans-front-recv'. Events for this sourcetype are ingested by a universal forwarder with the settings below:

props.conf

[exch_file_trans-front-recv]
ANNOTATE_PUNCT = false
FIELD_HEADER_REGEX = ^#Fields:\s+(.*)
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = date_time
BREAK_ONLY_BEFORE_DATE = true
MAX_TIMESTAMP_LOOKAHEAD = 24
initCrcLength = 256
TRANSFORMS-no_column_headers = no_column_headers

transforms.conf

[no_column_headers]
REGEX = ^#.*
DEST_KEY = queue
FORMAT = nullQueue

For this sourcetype I have some events that I want to drop before indexing. You can see an example below:

2024-08-22T12:58:31.274Z,Sever01\Domain Infrastructure Sever01,08DCC212EB386972,6,172.25.57.26:25,172.21.255.8:29635,-,,Local

So I'm interested in dropping events matching the pattern '...172.21.255.8:....,'. To do this, I created these settings on the indexer cluster layer:

props.conf

[exch_file_trans-front-recv]
TRANSFORMS-remove_trash = exch_file_trans-front-recv_rt0

transforms.conf

[exch_file_trans-front-recv_rt0]
REGEX = ^.*?,.*?,.*?,.*?,.*?,172.21.255.8:\d+,
DEST_KEY = queue
FORMAT = nullQueue

After applying this configuration across the indexer cluster, I still see new events with this pattern. What am I doing wrong?
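One way to rule out the regex itself is to test it against the sample event offline. A Python sketch (note that the dots in the IP would ideally be escaped, since an unescaped '.' matches any character; with or without escaping, this particular regex does match the sample, which suggests the problem lies elsewhere, e.g. where the events are actually parsed):

```python
import re

# Regex from the transforms.conf stanza, with the IP's dots escaped
# so they match literally rather than matching any character.
pattern = r"^.*?,.*?,.*?,.*?,.*?,172\.21\.255\.8:\d+,"

# Sample event from the post.
event = (
    "2024-08-22T12:58:31.274Z,"
    "Sever01\\Domain Infrastructure Sever01,"
    "08DCC212EB386972,6,172.25.57.26:25,172.21.255.8:29635,-,,Local"
)

print(bool(re.search(pattern, event)))  # True
```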
Hi @claudio_manig , I tried what you wrote in outputs.conf, but it still has header problems. Can you provide a practical example, please? Thank you so much for your kindness and helpfulness, Giulia