All Topics

Hi, I have installed Splunk DB Connect 3.6.0 on the Splunk 8.0.5 platform. I keep getting this error: 'Cannot communicate with task server, please check your settings.' When I try to restart the task server I get 'Failed to restart'. So far I have checked the following:
- Confirmed the java path file has the correct info (/opt/app/jdk1.8.0_51)
- Confirmed that ports 9998 and 9999 are not currently used
- Confirmed the ports are not blocked by any firewall
- Tried other ports instead of 9998, such as 1025, but the error persists
- Restarted the Splunk server a couple of times

Also, I see the following errors when I run 'index=_internal sourcetype="dbx*" *ERROR*':

[ERROR] [settings.py], line 133: unable to update java path file [/opt/app/splunk/splunk/etc/apps/splunk_app_db_connect/linux_x86/bin/customized.java.path]
[ERROR] [settings.py], line 89: Throwing an exception
Traceback (most recent call last):
  File "/opt/app/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 76, in handle_POST
    self.validate_java_home(payload["javaHome"])
  File "/opt/app/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 215, in validate_java_home
    is_valid, reason = validateJRE(java_cmd)
  File "/opt/app/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/jre_validator.py", line 73, in validateJRE
    output = output.decode('utf-8')
AttributeError: 'str' object has no attribute 'decode'

Any solution to resolve this would be of great help. Thanks!
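The traceback points at the likely root cause: on Splunk 8.x the app code runs under Python 3, where the captured output can already be a str, and jre_validator.py then calls .decode() on it. A minimal defensive patch would look something like this (a sketch based only on the traceback, not the actual app source):

# In splunk_app_db_connect/bin/dbx2/jre_validator.py, near the line in the traceback.
# Under Python 3 the captured output may already be decoded, so only decode bytes.
if isinstance(output, bytes):
    output = output.decode('utf-8')

If a newer DB Connect release addresses this, upgrading would be cleaner than patching by hand.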
Hi, I am building a Node project that includes AppDynamics. It's built via Yarn and it fails at the appdynamics-native step: Exit code: 1 Command: npm install request@2.40.0 fs-extra@2.0.0 tar@5.0.0 && node install.js appdynamics-native-native appdynamics-native 8373.0.0  I checked the install script for appdynamics-native and found that it gets the architecture string from process.arch and tries to match it against x64, win32, or ia32; if it can't, it returns null and later exits with an error. Presumably there's no arm64 build for appdynamics-native (I don't see arm64 on the supported OS for Node.js page). I can modify the install script to force the x64 download (which should be able to run via Rosetta), but Yarn will overwrite the changes during the build. The only workaround I found that works is using NVM to install an x64 Node binary and then building the project, but it's not great to have to run the whole thing via Rosetta rather than just this module. Does anyone know of a better approach?
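For anyone hitting the same wall, the x64-Node workaround looks roughly like this (a sketch; assumes nvm is installed, and the Node version is just an example):

arch -x86_64 zsh    # start a shell under Rosetta so tooling reports x86_64
nvm install 16      # installs an x64 Node build inside that shell
nvm use 16
yarn install        # appdynamics-native now fetches its x64 binary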
Hi All, We are currently onboarding the Okta Identity Cloud logs using the Splunk-built add-on for Okta Identity Cloud. When we configure an input for the test instance of Okta Cloud it works perfectly fine, but when we configure the input for the Okta Cloud production instance, logs are not coming in. We have tried the steps below:
- Disabling and re-enabling the input
- Deleting and re-creating the input
- Creating a new API input in Okta
- Changing the configuration items to high and low values
- Changing the interval to higher values
- Reviewing internal logs for errors
- Testing the API key locally (which was successful)
- Configuring the API key on a different heavy forwarder

While checking on the Okta side, it shows a rate limit warning. Any help would be very appreciated.

Thanks, Bhaskar Chourasiya
Splunk logs are missing for a few scheduler jobs. Is there a way to find the missing logs using an advanced search?
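If the jobs in question are scheduled saved searches, the scheduler's own internal logs are usually the quickest place to look. A sketch (sourcetype=scheduler and these field names are standard in _internal; status values vary by version):

index=_internal sourcetype=scheduler status!=success
| stats count by savedsearch_name status app user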
I want to configure two HEC tokens with the same value because I want to load balance traffic between them. I followed the document https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UseHECusingconffiles and edited /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf:
1. Created a new stanza named the same as the HEC token that I want to edit: [hec-test1]
2. Under the stanza, specified the new token value to override it: token = xxxxx

After the edit, it does not work. Splunk even returns an error:
Checking: /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf
Invalid key in stanza [hec-test1] in /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf, line 4: token (value: xxxxx).
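For comparison, HEC input stanzas in inputs.conf are normally prefixed with http://, which would explain why the config check flags token as an invalid key in a bare [hec-test1] stanza. A sketch of the expected shape (the token value is a placeholder):

[http://hec-test1]
disabled = 0
token = xxxxx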
I followed Microsoft's recommendations for security events for domain-joined computers. My Windows server logs are massive now, over 26 GB. We are using heavy forwarders to get the data to Splunk. What is the best way to ensure that I am not getting a lot of ancillary data not needed for security dashboarding? Is there a sample inputs.conf that will filter only for security events?
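As a starting point, the Windows event log input supports whitelisting by event ID, so only the events your dashboards use ever leave the host. A sketch (the event IDs below are illustrative examples, not a vetted baseline):

[WinEventLog://Security]
disabled = 0
# Collect only the listed event IDs; everything else is dropped at the input
whitelist = 4624,4625,4648,4672,4688,4720,4740

Filtering at the input (or with props/transforms on the heavy forwarder) is cheaper than indexing everything and discarding it at search time.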
I need to create a search and subsearch to exclude results in a query. The primary search is a lookup table. The subsearch is a query on events that extracts a field I want to use to join to the primary search. The common field is hostname. If a given hostname in the lookup table is found in the subsearch, I want to discard it.

Primary search:
| inputlookup hosts.csv
field = hostname
output: host1 host2 host3

Subsearch:
index=abc message="for account" sourcetype=type1
| rex field=names "(?<hostname>\S+)"
field: hostname
output: host3

I want the following output:
hostname
host1
host2

I want to discard host3 since it's in the subquery. How do I correlate the searches to do this? I can't use a join because the hostname in the subsearch is not computed until the subquery is executed. Thanks in advance.
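One pattern that fits this exactly (a sketch): have the subsearch return just the hostname field and exclude those values with NOT. The subsearch's results are rewritten into a condition like (hostname="host3"), so no join is needed:

| inputlookup hosts.csv
| search NOT [ search index=abc sourcetype=type1 message="for account"
    | rex field=names "(?<hostname>\S+)"
    | dedup hostname
    | fields hostname ]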
I'm attempting to utilize a lookup to pass static strings to create 'stats' commands. The result is sent to the search, but it's treated as one large string instead of the various values/statistical operations that are part of the search. I'm wondering if there's a way to get Splunk to interpret the command as intended.
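One way this is sometimes handled is with map, which substitutes fields from each incoming row into a search string that is then parsed as real SPL. A sketch, assuming a hypothetical lookup stats_snippets.csv with a stats_expr column holding text like "count by host":

| inputlookup stats_snippets.csv
| map maxsearches=10 search="search index=main | stats $stats_expr$"

Macros are the other common route when the snippet is fixed per dashboard rather than per row.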
I tried to do it this way, but the results don't match. How can I show the result of the first search and then the second one in columns in the correct order?

| rest /services/data/transforms/lookups
| table eai:acl.app eai:appName filename title fields_list updated id *
| where type="file"
| map maxsearches=1000 search="| inputlookup $filename$ | stats count | where count = 0 | eval lookup_vazia=$filename$"
| append [ search index=_internal sourcetype=lookup_editor_rest_handler "Lookup edited successfully"
    | stats count by _time user namespace lookup_file
    | rename lookup_file as "lookup_editada" ]
We have a custom app which contains just props and transforms configs. When we try to upload the app.tgz file, it throws the failures below. Need some insights on this.

Source code and binaries standards [ failure ]
Check that files outside of the bin/ and appserver/controllers directory do not have execute permissions and are not .exe files. On Unix platform, Splunk recommends 644 for all app files outside of the bin/ directory, 644 for scripts within the bin/ directory that are invoked using an interpreter (e.g. python my_script.py or sh my_script.sh), and 755 for scripts within the bin/ directory that are invoked directly (e.g. ./my_script.sh or ./my_script). On Windows platform, Splunk recommends removing user's FILE_GENERIC_EXECUTE for all app files outside of the bin/ directory except users in ['Administrators', 'SYSTEM', 'Authenticated Users', 'Administrator'].
This file has execute permissions for owners, groups, or others. File: default/transforms.conf
This file has execute permissions for owners, groups, or others. File: metadata/default.meta
This file has execute permissions for owners, groups, or others. File: default/props.conf
This file has execute permissions for owners, groups, or others. File: default/app.conf
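The check is complaining that the listed files carry execute bits. A sketch of the usual fix before re-packaging (assuming the app directory is named my_app; adjust to your layout):

# Clear execute bits on all files outside bin/, then rebuild the archive
find my_app -type f ! -path "my_app/bin/*" -exec chmod 644 {} \;
tar -czf my_app.tgz my_app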
For Salesforce monitoring, we initially had an interesting field called Username for all the searches. However, it is not available anymore, and for every search we have been receiving the following error:

The following error(s) occurred while the search ran. Therefore, search results might be incomplete:
Could not load lookup=LOOKUP-SFDC-USER_NAME1
Could not load lookup=LOOKUP-SFDC-USER_NAME2

Could anyone please let me know how I can fix this? Thanks in advance!
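A first diagnostic step that can help is confirming whether the lookup definitions those automatic lookups reference still exist and are shared to your app. A sketch using the lookups REST endpoint (the SFDC filter is just a guess at the naming):

| rest /services/data/transforms/lookups
| search title=*SFDC*
| table title eai:acl.app eai:acl.sharing filename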
Hello Everyone, Recently I got to know about a feature in AppDynamics where we can trigger scripts on an HR violation. I am really excited to use this functionality for our project. I am looking for some real-world use cases where this has been implemented and has resolved a great problem.
1. Currently I have written a script that restarts an application whenever it goes down (the App Availability HR gets violated). This has been working successfully.
2. I have also written a script to purge old logs when disk space utilization goes above a certain threshold. This works fine as well.
I am looking for some other use cases where this has been used or can be used. It would be really great if I can get suggestions and ideas on this. Thank You, Saad.
Hello, I have the following type of event, and I would like to extract the `tags` field into its respective fields.

2022-10-17 06:50:00.997, root_device_name="/dev/sda1", root_device_type="ebs", state_name="running", subnet_id="subnet-REDACTED", tags="{"App": "myapp", "Name": "myserver", "Owner": "myteam", "Scope": "myscope", "AWSBackup": "True", "Environment": "myenv", "Compliance requirement": "N/A"}", virtualization_type="hvm", vpc_id="vpc-REDACTED"

I have tried the following, which did not work for me:

index=myindex sourcetype=mysourcetype earliest=@d i-REDACTED source=awsec2instances | spath input=tags

How do I extract these JSON fields from an event like this?
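The embedded quotes mean the auto-extracted tags value is usually truncated and not valid JSON, which is why spath has nothing to parse. A sketch that pulls the brace-delimited block straight out of _raw first, then parses it:

index=myindex sourcetype=mysourcetype source=awsec2instances
| rex field=_raw "tags=\"(?<tags_json>\{.+?\})\""
| spath input=tags_json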
Hello Team, I'm new to Splunk, trying to get some insight/help on the issue below. I'm trying to read data from 2 different indexes and create a consolidated table. The scenario here is that the field values are the same but the field names are different.

index="itsi_grouped_alerts" source="ABC" sourcetype=itsi_notable:group | where itsi_group_id="8a84c088-ba86-4d0a"

index="itsi_notable_audit" source="Notable Event Audit" sourcetype=itsi_notable:audit event_id="8a84c088-ba86-4d0a"

When I try to use a join command, it doesn't give any error. Appreciate your assistance.
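A common alternative to join here (a sketch): run both searches at once and normalize the two ID field names into one with coalesce before grouping:

(index="itsi_grouped_alerts" source="ABC" sourcetype=itsi_notable:group)
OR (index="itsi_notable_audit" source="Notable Event Audit" sourcetype=itsi_notable:audit)
| eval group_id=coalesce(itsi_group_id, event_id)
| where group_id="8a84c088-ba86-4d0a"
| stats values(*) as * by group_id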
I have a query in a panel that is being output in a table. Can I adjust the width of one of the columns, shrinking it, so that the text is then wrapped across multiple lines?
So I have been able to set up and create the different monitors for my universal forwarder. I'm working in a test environment so I don't need SSL; however, I am attempting to monitor changes to an Ubuntu 16.04 host via the universal forwarder. The data is pretty sparse; I initially thought it was because there is no user interaction. Now I get some logs, but I also get a 500 internal web error. Any idea on the cause of this? And why am I not getting the logs from tmp or the user access logs?
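For comparison, a minimal monitor sketch for the usual Ubuntu auth/syslog locations (paths assume a stock 16.04 layout, the index is a placeholder, and the forwarder's user needs read access to these files):

[monitor:///var/log/auth.log]
disabled = 0
index = main

[monitor:///var/log/syslog]
disabled = 0
index = main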
I'm asking this question because the only solutions I can find for this problem use the XML config file, while I only have access to the JSON source code. I've looked at the recent Splunk documentation and there doesn't seem to be a depends field for the visualization configuration in the JSON source file. I'm trying to hide a specific panel when a dropdown choice isn't selected. Any help would be appreciated. Thanks.
Hi, Utter noob here - I apologise for any really silly questions! I'm installing Universal Forwarder on several machines which will forward data to a further intermediate instance, and then on to Enterprise. My question is around the user account that UF wants when I'm installing it: does this have to be a local service account, or can it be a Domain User account? I'm asking because on a domain-joined machine I have created a SplunkAdmin local user, but when I go to Local Security Policy > Local Policies > User Rights Assignment > Log on as a service to add the local account, the account is not shown, just the domain accounts and groups. Does this mean I need to create a Splunk account at the domain (AD) level and use it on all machines where I am installing Splunk Universal Forwarder? Thanks for any and all help!
Hello, for Eventgen, can I place the sample file in a separate directory under the samples directory?

e.g.: /opt/splunk/etc/apps/SA-Eventgen/samples/my-samples/test1-sample.csv

If I can place the sample file like that, where can I specify the location of test1-sample.csv in eventgen.conf? Thank you in advance for your help.
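If it helps, eventgen.conf has a sampleDir setting that can point a stanza at a non-default samples directory. A sketch using the path from the question (other stanza settings omitted):

[test1-sample.csv]
sampleDir = /opt/splunk/etc/apps/SA-Eventgen/samples/my-samples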
Hello all, I am trying to upgrade my Splunk Enterprise from 8.2.0 to 9.0.0 and I keep running into this error when the installer is done copying new files and does a 'rollback action': "setup cannot copy the file splknetdrv.sys ensure that the location specified below is correct or change it and insert splunk network monitor kernel driver in the drive you specify". The file is in the correct location, but when I hit retry it asks again. I finally just cancel, and it asks "continue setup without copying file?", to which I say yes. It then asks about the other two system files, SplunkMonitorNoHandleDrv.sys and splunkdrv.sys, which brings the same error, and the installation fails. I ran as admin and also through PowerShell. I looked through my logs and did not see any problems.