All Topics

I have a search:

index="xyz" sourcetype="csv" | fillnull value="unknownMan" field1 field2 field3 field4 | eventstats dc(field1) as xyz by field2 field3 field4 | table field1 field2 field3 field4

When I run this I get NULL values in the results. Why would NULL values appear when there are no NULL values in the events?
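A minimal diagnostic sketch, assuming the same index, sourcetype, and field names as above; it checks whether the fields are actually extracted for every event, since values that look like NULL in a table usually come from events where the field was never extracted rather than from empty values in the raw data:

index="xyz" sourcetype="csv"
| eval f1=if(isnull(field1),"missing","present"), f2=if(isnull(field2),"missing","present"), f3=if(isnull(field3),"missing","present"), f4=if(isnull(field4),"missing","present")
| stats count by f1 f2 f3 f4

If any combination shows "missing", the field extraction (rather than fillnull) is the place to look.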
How do I set up an alert that runs hourly every day? For example, if new transactions/events occur, alert the user.
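A minimal savedsearches.conf sketch, where the search, index, and email address are placeholders; it runs every hour and fires only when at least one event is returned:

[Hourly new transaction alert]
search = index=my_index sourcetype=my_transactions
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = user@example.com

The same settings can be configured from the UI via Save As > Alert with a cron schedule of 0 * * * *.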
Hi Everyone, we need PAM server logs without installing any third-party app on the PAM server. Is it possible to do the monitoring without installing a third-party app? Regards, Jack
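One agentless option is to have the PAM server send syslog to a Splunk heavy forwarder (or to a syslog server it already writes to) and collect it with a network or file monitor input. A minimal inputs.conf sketch on the receiving Splunk instance; the port, index, and sourcetype names are assumptions:

[udp://514]
index = pam
sourcetype = pam:syslog
disabled = 0

For production volumes, a dedicated syslog receiver (e.g. syslog-ng writing files that Splunk monitors) is generally preferred over a direct UDP input.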
Please let me know the correlation search query and time range conditions for these two use cases. I have Windows PowerShell logs onboarded.

1. Suspicious Windows Shell Launched by Web Applications
2. Suspicious Windows Shell Launched by a Trusted Process
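A rough sketch of the first use case, assuming Sysmon process-creation events (EventCode 1) or Security 4688 with a parent-process field; the index, sourcetype, and field names are assumptions and would need to match your environment:

index=wineventlog (EventCode=4688 OR EventCode=1)
    (ParentProcessName="*w3wp.exe" OR ParentProcessName="*httpd*" OR ParentProcessName="*nginx*" OR ParentProcessName="*tomcat*")
    (NewProcessName="*cmd.exe" OR NewProcessName="*powershell.exe" OR NewProcessName="*pwsh.exe")
| stats count min(_time) as first_seen max(_time) as last_seen by host ParentProcessName NewProcessName

A time range of the last 60 minutes on a 60-minute schedule is a common starting point for this kind of correlation search; the second use case follows the same pattern with a list of trusted parent processes in place of the web servers.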
I have a flat file in JSON format where events have no date/time, as follows:

{"device": "info.gw.xyz.com", "ip": "x.x.x.x", "age": "0", "mac": "Incomplete", "interface": " "},
{"device": "info.gw.xyz.com", "ip": "x.x.x.x", "age": "-", "mac": "0000.0000.0000", "interface": "Vlan673"}

My props.conf file is as follows:

[my_arp]
INDEXED_EXTRACTIONS = JSON
TZ=UTC

The problem is that when I search the events, they are four hours in the future. The files are on a server that has the UF and the correct time set. Looking through the Splunk docs (https://docs.splunk.com/Documentation/Splunk/9.0.1/Data/HowSplunkextractstimestamps) I see this: "If no events in the source have a date, Splunk software tries to find a date in the source name or file name. The events must have a time, even if they don't have a date." The files do have a date and time. How do I fix this? Thx
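A sketch of one way to force the timestamp behaviour, assuming the props.conf is deployed on the universal forwarder (with INDEXED_EXTRACTIONS the parsing, including timestamping, happens there) and that using the time of ingestion is acceptable since the events carry no timestamp of their own:

[my_arp]
INDEXED_EXTRACTIONS = json
DATETIME_CONFIG = CURRENT
TZ = UTC

DATETIME_CONFIG = CURRENT stamps each event with the clock of the parsing host instead of guessing a time from the file, which is the usual cause of events landing a timezone offset in the future. If the file's modification time is preferred, DATETIME_CONFIG = NONE typically uses that for monitored files.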
Hi, I've got the following search that I would like to amend as follows:

1. swipe_in and swipe_out times should show on the same row for each "transaction" (an in and an out being considered one transaction).
2. Only show the duration from swipe_in to swipe_out, not from swipe_out to the next swipe_in.

Essentially my table should display: swipe_in time, swipe_out time, and duration. Thank you in advance.

Search details:

| eval location_desc=if(match(location_desc,"OUT"), "swipe_out", "swipe_in")
| sort _time
| streamstats window=2 current=f first(_time) as previous_swipe
| eval duration=round((_time-previous_swipe)/3600, 2)
| table location_desc, _time, duration
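A sketch of one way to pair each swipe_in with the following swipe_out on a single row, assuming swipes arrive strictly in in/out pairs and ignoring any per-person or per-badge split (a by clause would be needed for that):

... base search ...
| eval location_desc=if(match(location_desc,"OUT"), "swipe_out", "swipe_in")
| sort 0 _time
| streamstats count(eval(location_desc="swipe_in")) as txn
| stats min(eval(if(location_desc="swipe_in", _time, null()))) as swipe_in
        max(eval(if(location_desc="swipe_out", _time, null()))) as swipe_out
        by txn
| eval duration=round((swipe_out-swipe_in)/3600, 2)
| fieldformat swipe_in=strftime(swipe_in, "%Y-%m-%d %H:%M:%S")
| fieldformat swipe_out=strftime(swipe_out, "%Y-%m-%d %H:%M:%S")
| table swipe_in swipe_out duration

Because each row now covers exactly one in/out pair, the out-to-next-in gap never appears as a duration.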
Hello there, here is the context: I have a Splunk test environment with one indexer, one search head, and one forwarder. I'm in charge of finding a way to guarantee the integrity of the events available on the search head.

My first question is: how do I test data integrity control? I implemented it based on the Splunk documentation. I tried to run splunk clean and use the delete command (I now know that the event is not deleted from the index using delete), and I edited the log files, but the integrity check is always successful. In other words, in what case does the integrity check become unsuccessful?

My second question is: I changed the auth.log file. This could be super dangerous, but Splunk just displays both events, the one from before the edit and the one from after the edit. How can I use Splunk to detect such changes?

Any help would be appreciated, thank you so much for your time.
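For reference, a sketch of how data integrity control is enabled and verified, assuming an index named my_index; integrity control hashes the rawdata journal inside the index buckets, so it only fails if the bucket files on the indexer are tampered with, not if the original source file (e.g. auth.log) is edited and re-ingested as new events:

# indexes.conf on the indexer
[my_index]
enableDataIntegrityControl = true

# verify buckets from the indexer's CLI
splunk check-integrity -index my_index

Detecting edits to the source file itself is a file integrity monitoring problem (e.g. auditd or Sysmon file events forwarded to Splunk) rather than something the index integrity check covers.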
Hi experts, I'm trying to deploy a HF and forward (clone) logs to 2 different indexers. I have 2 UFs feeding Windows and syslog logs respectively to a HF. This is my HF outputs.conf; I think something is wrong here, as I can only see logs at indexer1:

[tcpout]
defaultGroup=windows,syslog

[tcpout:windows,syslog]
server=indexer1 ip:9997

[tcpout:windows,syslog]
server=indexer2 ip:9997

Appreciate any help.
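A sketch of an outputs.conf that clones every event to both indexers; the group names are arbitrary and the server values are placeholders. The key point is that each target group needs its own uniquely named stanza, and listing both groups in defaultGroup sends a copy to each:

[tcpout]
defaultGroup = indexer1_group, indexer2_group

[tcpout:indexer1_group]
server = <indexer1-ip>:9997

[tcpout:indexer2_group]
server = <indexer2-ip>:9997

With two stanzas that share the same name, as in the original config, the second simply overrides the first, which is why only indexer1 receives data.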
How do I move Splunk Cloud archives to Azure Blob Storage? Our contract with Splunk Cloud is being terminated and we want to move the data to Sentinel as part of this. Any suggestions?
I'm trying to install Splunk SOAR on an EC2 Linux machine (8 vCPU and 16GB RAM). I used this link: https://docs.splunk.com/Documentation/SOARonprem/5.3.5/Install/InstallRPM. On running sudo ./soar-install I'm getting errors. I'm trying this setup for test purposes only; the storage I have added is less than 500GB.

Traceback (most recent call last):
  File "/opt/phantom/5.3.4/splunk-soar/./soar-install", line 85, in main
    deployment.run()
  File "/opt/phantom/5.3.4/splunk-soar/install/deployments/deployment.py", line 130, in run
    self.run_pre_deploy()
  File "/opt/phantom/5.3.4/splunk-soar/usr/python39/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/opt/phantom/5.3.4/splunk-soar/install/deployments/deployment.py", line 163, in run_pre_deploy
    raise InstallError(
install.install_common.InstallError: pre-deploy checks failed. Warnings can be ignored with --ignore-warnings
install failed.

Has anyone faced issues similar to this?
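If the only pre-deploy failures are warnings (for example the storage-size check in a test install), the installer's own message suggests the flag below; this is just a sketch of that suggestion and assumes you have reviewed which checks are being bypassed:

sudo ./soar-install --ignore-warnings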
Hi, I have installed Splunk DB Connect App version 3.6.0 on the Splunk 8.0.5 platform. I keep getting this error: 'Cannot communicate with task server, please check your settings.' When I try to restart the task server I get 'Failed to restart'. So far I have checked the following:

Confirmed the java path file has the correct info (/opt/app/jdk1.8.0_51)
Confirmed that ports 9998 and 9999 are not currently in use
Confirmed the ports are not blocked by any firewall
Tried using other ports instead of 9998, like 1025, but the error persists
Restarted the Splunk server a couple of times

Also, I see the following error when I run 'index=_internal sourcetype="dbx*" *ERROR*':

[ERROR] [settings.py], line 133: unable to update java path file [/opt/app/splunk/splunk/etc/apps/splunk_app_db_connect/linux_x86/bin/customized.java.path]
[ERROR] [settings.py], line 89: Throwing an exception
Traceback (most recent call last):
  File "/opt/app/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 76, in handle_POST
    self.validate_java_home(payload["javaHome"])
  File "/opt/app/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 215, in validate_java_home
    is_valid, reason = validateJRE(java_cmd)
  File "/opt/app/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/jre_validator.py", line 73, in validateJRE
    output = output.decode('utf-8')
AttributeError: 'str' object has no attribute 'decode'

Any solution to resolve this would be of great help. Thanks!
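The traceback shows decode() being called on a value that is already a str, which happens when that code path runs under Python 3. As a purely local, hypothetical workaround (upgrading DB Connect to a Python 3 compatible release is the cleaner fix), the failing line in jre_validator.py could be guarded as in this sketch:

# around line 73 of .../bin/dbx2/jre_validator.py (hypothetical local patch)
if isinstance(output, bytes):
    output = output.decode('utf-8')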
Hi, I am building a Node project that includes AppDynamics. It's built via Yarn and it fails at the appdynamics-native step:

Exit code: 1
Command: npm install request@2.40.0 fs-extra@2.0.0 tar@5.0.0 && node install.js appdynamics-native-native appdynamics-native 8373.0.0

I checked the install script for appdynamics-native. It gets the architecture string from process.arch and tries to match it against x64, win32, or ia32; if it can't, it returns null and later exits with an error. Presumably there's no arm64 build for appdynamics-native (I don't see arm64 on the supported OS page for Node.js). I can modify the install script to force the x64 download (which should be able to run via Rosetta), but Yarn overwrites the change during the build. The only workaround I have found that works is using NVM to install an x64 Node binary and then building the project, but it's not great to have to run the whole thing via Rosetta rather than just this module. Does anyone know of a better approach?
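For reference, a sketch of the NVM/Rosetta workaround described above on an Apple Silicon Mac; the Node version is a placeholder and this assumes Rosetta 2 and nvm are already installed:

# start a shell under Rosetta so nvm detects x64, then install and use an x64 Node
arch -x86_64 zsh
nvm install 16
nvm use 16
yarn install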
Hi All, we are currently in the process of onboarding Okta Identity Cloud logs using the Splunk-built add-on for Okta Identity Cloud. When we configure an input for the test instance of Okta Cloud it works perfectly fine, but when we configure the input for the Okta Cloud production instance, no logs come in. We have tried the steps below:

Disabling and re-enabling the input.
Deleting and re-creating the input.
Creating a new API input in Okta.
Changing the configuration items to high and low values.
Changing the interval to higher values.
Reviewing internal logs for errors.
Testing the API key locally (which was successful).
Configuring the API key on a different heavy forwarder.

When checking on the Okta side, it shows a rate limit warning. Any help would be very much appreciated.

Thanks,
Bhaskar Chourasiya
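One way to narrow this down is to look at the add-on's own logging in _internal for rate-limit (HTTP 429) or authentication errors; a sketch, assuming the add-on's log files contain "okta" in the source path (adjust the pattern to match your installation):

index=_internal source=*okta* (ERROR OR WARN OR "429" OR "rate limit")
| stats count by host source
| sort - count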
Splunk logs are missing for a few scheduler jobs. Is there a way to find the missing logs using an advanced search?
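A sketch that summarises scheduler activity per saved search from Splunk's internal logs, which can surface runs that were skipped or did not complete rather than logged as successful; the field names are the standard ones in sourcetype=scheduler:

index=_internal sourcetype=scheduler
| stats count(eval(status="success")) as successful
        count(eval(status="skipped")) as skipped
        count(eval(status="continued")) as continued
        count as total
        by savedsearch_name app user
| where skipped > 0 OR successful < total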
I want to configure two HEC tokens with the same value because I want to load balance traffic between them. I followed the document https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UseHECusingconffiles.

I edited /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf:

1. Created a new stanza. The name of the stanza is the same as the name of the HEC token that I want to edit: [hec-test1]
2. Under the stanza, specified the new token value to override it: token = xxxxx

After the edit, it does not work. Splunk even returns an error:

Checking: /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf
Invalid key in stanza [hec-test1] in /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf, line 4: token (value: xxxxx).
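The error is consistent with the stanza name missing the http:// prefix: in inputs.conf a HEC token lives in a stanza named [http://<input_name>], and the token key is only valid there. A minimal sketch, keeping the placeholder value from above:

[http://hec-test1]
disabled = 0
token = xxxxx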
I followed Microsoft's recommendations for security events for domain-joined computers. My Windows server logs are massive now, over 26GB. We are using heavy forwarders to get the data to Splunk. What is the best way to ensure that I am not getting a lot of ancillary data not needed for security dashboarding? Is there a sample inputs.conf that will filter only for security events?
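A sketch of an inputs.conf stanza that collects only the Security channel and whitelists a subset of event codes; the index name and the EventCode list are only examples and should be replaced with the codes your dashboards actually use:

[WinEventLog://Security]
disabled = 0
index = wineventlog
whitelist = 4624,4625,4648,4672,4688,4720,4726,4740

If the events are already arriving and only need trimming, the equivalent filtering can also be done on the heavy forwarder with props.conf/transforms.conf routing unwanted EventCodes to nullQueue.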
I need to create a search and subsearch to exclude results in a query. The primary search is a lookup table. The subsearch is a query on events that extracts a field I want to use to join to the primary search. The common field is hostname. If a given hostname in the lookup table is found in the subsearch, I want to discard it.

Primary search:
| inputlookup hosts.csv
field = hostname, output:
host1
host2
host3

Subsearch:
index=abc message="for account" sourcetype=type1 | rex field=names "(?<hostname>\S+)"
field hostname, output:
host3

I want the following output:
hostname
host1
host2

I want to discard host3 since it's in the subquery. How do I correlate the searches to do this? I can't use a join because the hostname in the subsearch is not computed until the subquery is executed. Thanks in advance.
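A sketch of one way to do this with a NOT subsearch, assuming the lookup's column is literally named hostname; the subsearch returns the hostnames it finds (host3 in the example) and the outer search filters them out of the lookup rows:

| inputlookup hosts.csv
| search NOT
    [ search index=abc sourcetype=type1 message="for account"
      | rex field=names "(?<hostname>\S+)"
      | dedup hostname
      | fields hostname ]

Because the rex runs inside the subsearch before its results are handed back, the extracted hostname values are available for the exclusion without needing a join.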
I'm attempting to use a lookup to pass static strings that build 'stats' commands. The result is passed to the search, but it's treated as one large string instead of the various values/statistical operations that make up the command. I'm wondering if there's a way to get Splunk to interpret the command as intended.
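One pattern that does interpret lookup values as SPL is map, which substitutes $field$ tokens textually into a templated search before running it. A sketch, where the lookup name stats_fragments.csv and its column stats_expr are hypothetical (e.g. a row containing count avg(bytes)):

| inputlookup stats_fragments.csv
| map maxsearches=10 search="search index=main sourcetype=access_combined | stats $stats_expr$ by host"

A plain subsearch won't work here because subsearch results are injected as literal search terms, not parsed as pipeline commands; map (or a macro) is the usual way around that.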
I tried to do it this way, but the results don't match. How can I show the result of the first search and then the second one, in columns in the correct order?

| rest /services/data/transforms/lookups
| table eai:acl.app eai:appName filename title fields_list updated id *
| where type="file"
| map maxsearches=1000 search="| inputlookup $filename$ | stats count | where count = 0 | eval lookup_vazia=$filename$"
| append
    [ search index=_internal sourcetype=lookup_editor_rest_handler "Lookup edited successfully"
      | stats count by _time user namespace lookup_file
      | rename lookup_file as "lookup_editada" ]
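A sketch of an alternative that joins the two result sets on the lookup file name instead of appending them, so each row carries both the empty-lookup check and the edit history in a fixed column order; the join key may need normalising (the lookup editor's lookup_file value can include the .csv extension or an app namespace), and the field names it introduces (row_count, last_edit, last_user) are placeholders:

| rest /services/data/transforms/lookups
| where type="file"
| table title filename eai:acl.app updated
| map maxsearches=1000 search="| inputlookup $filename$ | stats count as row_count | eval filename=\"$filename$\""
| where row_count=0
| join type=left filename
    [ search index=_internal sourcetype=lookup_editor_rest_handler "Lookup edited successfully"
      | stats latest(_time) as last_edit latest(user) as last_user by lookup_file
      | rename lookup_file as filename ]
| table filename row_count last_edit last_user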
We have a custom app which contains just props and transforms configs. When we try to upload the app.tgz file, it throws the failures below. Need some insights on this.

Source code and binaries standards [ failure ]
Check that files outside of the bin/ and appserver/controllers directory do not have execute permissions and are not .exe files. On Unix platform, Splunk recommends 644 for all app files outside of the bin/ directory, 644 for scripts within the bin/ directory that are invoked using an interpreter (e.g. python my_script.py or sh my_script.sh), and 755 for scripts within the bin/ directory that are invoked directly (e.g. ./my_script.sh or ./my_script). On Windows platform, Splunk recommends removing user's FILE_GENERIC_EXECUTE for all app files outside of the bin/ directory except users in ['Administrators', 'SYSTEM', 'Authenticated Users', 'Administrator'].
This file has execute permissions for owners, groups, or others. File: default/transforms.conf
This file has execute permissions for owners, groups, or others. File: metadata/default.meta
This file has execute permissions for owners, groups, or others. File: default/props.conf
This file has execute permissions for owners, groups, or others. File: default/app.conf
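A sketch of fixing the permissions before re-packaging, following the 644 recommendation quoted in the check output; my_app is a placeholder for the app's directory name:

# from the directory that contains the app folder
find my_app -type f -exec chmod 644 {} \;
find my_app -type d -exec chmod 755 {} \;
tar -czf my_app.tgz my_app

After uploading the rebuilt package, the execute-permission failures on the .conf and .meta files should clear.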