All Posts


Dear Splunkers, I am planning to build dashboard visualizations for Citrix NetScaler. We have the Splunk Add-on for Citrix NetScaler to collect logs from the NetScaler appliance. However, I can't find any app for Citrix NetScaler that leverages the logs collected by the add-on and presents them in visualizations. Any suggestions, please? Thanks.
It would be easier if you could influence limits.conf. Is this a limitation in Splunk Cloud?

Here is how I assess the situation. If you need to use join, your method is semantically equivalent to using one index search on the right. Append + stats may or may not be faster; your search also has some inefficiencies. But as long as current performance is acceptable, you don't have to worry. It is more profitable to investigate why your right-hand set is greater than 50K. You have to remember: any solution has to be evaluated against the specific dataset and the specific use case. There is no universal formula.

On the last point, you need to carefully review your indexed data. There are several ways to reduce the number of rows on the right. First of all, do you really have more than 50K unique combinations of ip, risk, score, contact? I ask because you really only care about these fields. My speculation is that you have many rows with identical combinations. Here is a simple test for you:

index=risk
| stats count by ip risk score contact
| stats count

Is this count really greater than 50K? If not, this would be easier to maintain and possibly more efficient:

| inputlookup host.csv
| rename ip_address as ip
| join max=0 type=left ip
    [ search index=risk
      | fields ip risk score contact
      | dedup ip risk score contact ]
| table ip host risk score contact

Another possible cause of too many rows on the right is that there are too many ip's in the index that are missing from the lookup. If that is the case, you can further restrict the index search by the lookup, i.e.,

| inputlookup host.csv
| rename ip_address as ip
| join max=0 type=left ip
    [ search index=risk
        [ inputlookup host.csv
          | rename ip_address as ip
          | fields ip ]
      | fields ip risk score contact
      | dedup ip risk score contact ]
| table ip host risk score contact

There are other ways to reduce the amount of data. They all depend on your dataset and use case.
Hi @Ryan.Paredez - Any idea on this? Thanks
Hi @yuanliu, I appreciate your help. I accepted your solution. I tried it and it worked; however, the subsearch hit the 50K max rows, so I split the join based on subnets. Is this the right way to do it?

See below: I split it into 3 joins, each with a filter based on a subnet. The issue is that I don't know whether a subnet will hit the 50K max rows in the future, and I will then have to adjust manually. Do you have any other suggestions? Thanks again.

| inputlookup host.csv
| rename ip_address as ip
| eval source="csv"
| join max=0 type=left ip
    [ search index=risk | fields ip risk score contact | search ip="10.1.0.0/16" | eval source="risk1" ]
| join max=0 type=left ip
    [ search index=risk | fields ip risk score contact | search ip="10.2.0.0/16" | eval source="risk2" ]
| join max=0 type=left ip
    [ search index=risk | fields ip risk score contact | search ip="10.3.0.0/16" | eval source="risk3" ]
| table ip, host, risk, score, contact
Hi, I'm trying to integrate Dynatrace with Splunk using the Dynatrace Add-on for Splunk. However, after the configuration I'm getting the error below:

(Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1143)')))

Has anyone experienced this, or does anyone know how to solve this certificate issue? FYI, I have updated the SSL certificate on both Splunk and Dynatrace, but it didn't help.
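For readers hitting the same message, here is a hedged illustration (not the add-on's actual code) of what "unable to get local issuer certificate" means on the Python side: the failure is in the client's trust store, which is missing the issuing CA, not in the server's certificate itself. The URL and bundle path below are placeholders.

# Hypothetical, minimal reproduction of CERTIFICATE_VERIFY_FAILED in plain Python.
# It assumes the endpoint presents a certificate whose issuing CA (for example an
# internal or TLS-inspecting proxy CA) is not in the default trust store.
import requests

dynatrace_url = "https://your-environment.live.dynatrace.com"  # placeholder

try:
    requests.get(dynatrace_url, timeout=10)  # uses the default CA bundle
except requests.exceptions.SSLError as err:
    # This is where "unable to get local issuer certificate" surfaces.
    print("verification failed:", err)

# Verification succeeds once the full chain, including the issuing CA, is in the bundle:
response = requests.get(dynatrace_url, timeout=10, verify="/path/to/ca-bundle.pem")  # placeholder path
print(response.status_code)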
Truncated Output (the message was too long):

C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   [SSL]
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   _rcvbuf = 1572864
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   allowSslRenegotiation = true
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   certLogMaxCacheEntries = 10000
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   certLogRepeatFrequency = 1d
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   ecdhCurves = prime256v1, secp384r1, secp521r1
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_resolve_ad_obj = 0
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   host = <Full Computer Name>
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   index = default
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   logCertificateData = true
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   sslQuietShutdown = false
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   sslVersions = tls1.2

C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   [WinEventLog://Application]
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   checkpointInterval = 5
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   current_only = 0
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   disabled = 0
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_resolve_ad_obj = 0
host = <Full Computer Name>
index = default
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   interval = 60
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   start_from = oldest

C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   [WinEventLog://DNS Server]
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   checkpointInterval = 5
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   current_only = 0
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   disabled = 0
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_resolve_ad_obj = 0
host = <Full Computer Name>
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   index = main
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   interval = 60
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   start_from = oldest

C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   [WinEventLog://Directory Service]
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   checkpointInterval = 5
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   current_only = 0
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   disabled = 0
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_resolve_ad_obj = 0
host = <Full Computer Name>
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   index = main
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   interval = 60
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   start_from = oldest

C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   [WinEventLog://ForwardedEvents]
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   checkpointInterval = 5
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   current_only = 0
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   disabled = 0
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_resolve_ad_obj = 0
host = <Full Computer Name>
index = default
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   interval = 60
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   start_from = oldest

C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   [WinEventLog://Security]
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   checkpointInterval = 5
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf   current_only = 0
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf   disabled = 0
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf   evt_resolve_ad_obj = 0
host = <Full Computer Name>
index = default
First, thanks for clearly illustrating the raw input, the desired output, and the logic to get from one to the other. Transaction is still the easiest way to go. You just need to keep track of which value belongs to which eventtype. Many people here are familiar with the traditional technique of using string concatenation. I will show a more semantic approach afforded by the JSON functions introduced in 8.1.

| rename _raw as temp ``` only if you want to preserve _raw for later ```
| tojson eventtype, field1
| transaction startswith="eventtype=get" endswith="eventtype=update"
| eval _raw = split(_raw, " ")
| eval Before = json_extract(mvindex(_raw, 0), "field1"), After = json_extract(mvindex(_raw, 1), "field1")
| rename temp as _raw ``` only if you want to preserve _raw for later ```
| fields Before, After

Note: the above is not completely semantic, as I am also relying on the side effect of Splunk's default lexical ordering.

Here is an emulation for you to play with and compare with real data:

| makeresults format=csv data="_time, eventtype, sessionid, field1
10:06, update, session2, newvalue3
10:05, get, session2, newvalue2
09:15, update, session1, newvalue2
09:12, get, session1, newvalue1
09:10, get, session1, newvalue1
09:09, update, session1, newvalue1
09:02, get, session1, oldvalue1
09:01, get, session1, oldvalue1
08:59, get, session1, oldvalue1"
| eval _time = strptime("2024-08-22T" . _time, "%FT%H:%M")
``` data emulation above ```

Output from the above search gives:

Before      After       _time
newvalue2   newvalue3   2024-08-22 10:05:00
newvalue1   newvalue2   2024-08-22 09:12:00
oldvalue1   newvalue1   2024-08-22 09:02:00
../python3.9/site-packages/urllib3/connectionpool.py:1099: InsecureRequestWarning: Unverified HTTPS request is being made to host ''. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#tls-warnings
  warnings.warn(
Response Code: 401
Response Text: <?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="WARN">call not properly authenticated</msg> </messages> </response>

This is the error I got from Splunk using the token that I created. The token is 100% correct. Is this because of the SSL?
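As context for the question above: a 401 with "call not properly authenticated" is an authentication problem rather than a TLS one, since verify=False (the source of the InsecureRequestWarning) only skips certificate checking. Below is a hedged sketch of how a Splunk REST call with an authentication token is typically formed; the host, token, and chosen endpoint are placeholders, not the poster's actual setup.

# Minimal sketch of calling the Splunk REST API (management port 8089) with a token.
import requests

splunk_base = "https://splunk.example.com:8089"  # placeholder management URL
token = "<authentication-token>"                 # placeholder

# Authentication tokens go in a Bearer header; a session key would use
# "Splunk <sessionKey>" instead.
response = requests.get(
    f"{splunk_base}/services/authentication/current-context",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,  # suppresses certificate verification only; it has no effect on a 401
)
print(response.status_code)
print(response.text)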
Hi, I support two customers who are both running Splunk on Windows, and after upgrades this year both are experiencing very similar problems. I use this to monitor swap memory usage:

index=_introspection swap component=Hostwide
| timechart avg(data.swap_used) span=1h

and as it increases we then start seeing dumps, which I can also graph with this:

index=_internal sourcetype=splunkd_crash_log "Crash dump written to:"
| timechart count

Like you, this has been logged with Splunk for some time but there is no fix yet, though they did just say there is an internal case looking into it. For my customers the problem builds up more slowly, so as long as they restart Splunk twice a week they have no problems. Sounds like that won't help you.

It's nice to know there are others with the same issue. Thanks for all your detail, especially re rammap.exe. Good luck.
I want to block the audit.log file from a particular instance from being sent to Splunk. Is the stanza below sufficient to accomplish that?

Per the spec for matching a file:

blacklist = <regular expression>
* If set, files from this input are NOT monitored if their path matches the specified regex.
* Takes precedence over the deprecated '_blacklist' setting, which functions the same way.
* If a file matches the regexes in both the deny list and allow list settings, the file is NOT monitored. Deny lists take precedence over allow lists.
* No default.

[monitor:///logs/incoming/file.com/all-messages.log]
sourcetype = something
index = something_platform
disabled = 0
blacklist = audit.log
You are correct, @gcusello, I wouldn't normally. The only reason I was listing it was because of the error I was getting at startup about it being incorrect, so I was trying to override it. As it turns out, the error came from the fact that when I fixed it in the distributed indexes.conf file, it didn't actually SAVE the correction, so that peer-app kept overriding everything else I did with the wrong configuration. It took me several hours of staring at the error before I actually saw it. So it was a carbon-based error.
I have a Splunk 9.1.2 server running RHEL 8 with about 50 clients. This is an air-gapped environment. I have a bunch of Linux (RHEL and Ubuntu) UFs and have configured inputs.conf to ingest files like /var/log/messages, /var/log/secure, /var/log/audit/audit.log, /var/log/cron, etc. Recently, I noticed that only logs from /var/log/messages and /var/log/cron are being ingested; specifically, I don't see /var/log/secure and /var/log/audit/audit.log.

I tried restarting the splunk process on one of the UFs and checked splunkd.log, and I don't see any errors. Here is what I see for /var/log/secure in splunkd.log (looks normal; I have typed it, as I can't copy/paste from the air-gapped machine):

TailingProcessor [xxxxxx MainTailingThread] passing configuration stanza: monitor:///var/log/secure
TailingProcessor [xxxxxx MainTailingThread] Adding watch on path:///var/log/secure
WatchedFile [xxxxxx tailreader 0] - Will begin reading at offset=xxxx for file=`/var/log/secure`

Here is my inputs.conf:

[default]
host = <indexer>
index = linux

[monitor:///var/log/secure]
disabled = false

[monitor:///var/log/messages]
disabled = false

[monitor:///var/log/audit/audit.log]
disabled = false

[monitor:///var/log/syslog]
disabled = false

File permissions seem to be fine for all of those files. Please note, SELinux is enabled, but the file permissions still look fine. Initially, I did have to run "setfacl -R -m u:splunkfwd:rX /var/log" for Splunk to get access to send logs to the indexer. btool also showed that I am using the correct inputs.conf. Any idea what's misconfigured?
Hello! Checking in on August 22, 2024: still not able to edit permissions on multiple objects at once.
Hey, thanks for taking the time to reply, bwheel, but I think you might have misread my post. I stated that I was clicking the "Rebuild forwarder Assets..." button. I'm not sure what you're referring to with the "regular 'update'" you mention. I also couldn't find any mention of an "update" option in the document you linked. Maybe I'm misunderstanding what you're saying, but either way please don't spend any further time on it. I opened a support case about the fact it didn't work, and they said it was a bug and provided me with a search to update the lookup table manually. I think they might have fixed it at this point. I seem to recall using it not too long ago.
I'm using the punchcard visualization in Dashboard Studio and the values on the left are getting truncated with ellipses. Is there a way to display the full value or edit the truncation style?
Hi @kareem, could you better describe your issue and share a sample of your data? Ciao. Giuseppe
Hi, I have a log that tracks user changes to a specific field in a form. The process is as follows:

1. The user accesses the form, which generates a log event with the "get" eventtype along with the current value of field1. This can occur several times as the user refreshes the page, or through code behind the scenes that generates an event based on how long the user stays on the page.
2. The user fills in the form and hits submit, which logs an event with the "update" eventtype.

Here's a simplified list of events:

_time    eventtype    sessionid    field1
10:06    update       session2     newvalue3
10:05    get          session2     newvalue2
09:15    update       session1     newvalue2
09:12    get          session1     newvalue1
09:10    get          session1     newvalue1
09:09    update       session1     newvalue1
09:02    get          session1     oldvalue1
09:01    get          session1     oldvalue1
08:59    get          session1     oldvalue1

I'm looking to get the last value of field1 before each "update" eventtype. Basically, I'd like to track what the value was before and what it was changed to, something like:

_time    Before       After
10:06    newvalue2    newvalue3
09:15    newvalue1    newvalue2
09:09    oldvalue1    newvalue1

I've tried this with a transaction command on the session, but I run into issues with the multiple "get" events in the same session, which makes it a little convoluted to extract the running values of field1. I also tried a combination of latest(field1) and earliest(field1), but that misses any updates that take place within the session; we sometimes have users who change the value and then change it back, and I'd like to capture those events as well.

Does anyone have any tips on how to get this accomplished? Thanks!
Splunk does work with ELB. Check out https://community.splunk.com/t5/Getting-Data-In/Does-external-load-balancer-works-with-Universal-Heavy-forwarder/m-p/532727
Hello, I have a query:

searchquery_oneshot = "search (index=__* ... events{}.name=ResourceCreated) | dedup \"events{}.tags.A\" | spath \"events{}.tags.A\" || lookup Map.csv \"B\" OUTPUT \"D\" | table ... | collect ..."

I ran this using the Python SDK in VS Code as:

oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
conn.cursor().execute(sql, val)

I ran the above using psycopg2 and got this error:

FATAL: Error in 'lookup' command: Could not construct lookup 'Map.csv, B, OUTPUT, D'. See search.log for more details.

The query works when run inside Splunk Enterprise, i.e. Map.csv is looked up and the result is fetched correctly. How do I locate my search.log? I assume it is under splunkhome/var/lib/dispatch/run. What is the error above? Thanks
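For reference, a minimal, self-contained sketch of a one-shot search with the Splunk Python SDK, with placeholder credentials and a placeholder query (this is not the poster's exact setup). On a typical installation the search.log for a job lives under $SPLUNK_HOME/var/run/splunk/dispatch/<sid>/search.log, and "Could not construct lookup" frequently means the lookup file or its definition is not shared into the app/user namespace the SDK session runs in.

# Hypothetical example using splunk-sdk-python (splunklib); adjust host, credentials,
# app, owner, and the query to your environment.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="localhost",      # placeholder
    port=8089,
    username="admin",      # placeholder
    password="changeme",   # placeholder
    app="search",          # the namespace the search (and its lookups) runs in
    owner="admin",
)

query = 'search index=main | lookup Map.csv B OUTPUT D | table B D'  # placeholder query

# oneshot returns an XML results stream by default; ResultsReader parses it into
# dicts (result rows) and Message objects (search diagnostics).
reader = results.ResultsReader(service.jobs.oneshot(query, count=0))
for item in reader:
    if isinstance(item, dict):
        print(item)               # a result row
    else:
        print("message:", item)   # a diagnostic message from the search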
Hi guys, when I extract a selected event, it doesn't show all the data in the event that I need to extract.