All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I'm getting this error even though the Python script does exist in commands.conf, and it was working earlier today. Any suggestions?
The search head that our security team uses is filling up /opt/splunk/var/lib/splunk/kvstore/; the directory is over 400 GB. To be honest, I'm not sure what role the KV store plays, but I feel it shouldn't be taking up this much space. Any tips on how to clean this up without breaking functionality?
Hi everyone, can someone please help me out? How can I monitor the following things in Jenkins with Splunk:
- load on the master (CPU/RAM)
- number of slaves
- duration of slave connections
- SDOM monitoring
- the build queue, as a graph
- average build time per day, as a graph
A quick solution would be appreciated, as I have been trying to find one for the last few days. Thank you.
Case: transferring data in JSON format from Splunk 6.x to Splunk 8 or 8.1 fails; the JSON is not parsed successfully with the following setup. The same setup works fine from Splunk 6.x to Splunk 7, so is this a Splunk 8 bug, or is there some workaround?
We are trying to implement a security solution on Splunk for a client that has multiple data sources in multiple countries. We want to ingest data from all these countries into the same Splunk instance, and we've decided to separate each country's data by adding a metadata field. Each country has its own user and can only see its own data. We achieved this by defining a new role and creating a search filter to set search restrictions for that role. The problem is that this security solution uses accelerated data models, and this way of filtering does not work on accelerated data models: a country can see other countries' data. Is there any kind of search that works properly on accelerated data models, where I can filter data by a metadata field?
Hi Splunk members, good day! I am looking for help creating a query over Windows Security event logs. The idea is to search for different Windows Security Event IDs occurring within a short span for the same Account Name. Example: if event ID XXXX occurs at 1:00 PM EST, and within 1 minute (i.e., by 1:01 PM) event ID YYYY occurs and then event ID ZZZZ also occurs for the same Account_Name, the query should return all those events. Note that the condition should only be met if all three event IDs trigger within the short span (1 minute here). If, for example, event ID XXXX occurs but event IDs YYYY and ZZZZ do not for the same Account Name, the output should be empty. Many thanks.
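One possible sketch (assuming the events live in index=wineventlog with fields EventCode and Account_Name — adjust the names to your environment, and substitute the real IDs for XXXX/YYYY/ZZZZ):

```
index=wineventlog (EventCode=XXXX OR EventCode=YYYY OR EventCode=ZZZZ)
| transaction Account_Name maxspan=1m
| where mvcount(mvdedup(EventCode)) = 3
```

transaction groups events for the same Account_Name that fall within 1 minute of each other; since only the three IDs are searched, requiring 3 distinct EventCode values in a group means all three occurred. transaction can be expensive at scale, so a streamstats-based approach is worth considering for large volumes.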
Hi @aplura_llc_supp / @mikerefakis,

We are using the NetSkope App v1.2.0 in our Splunk instance (v7.2.9.1) to collect data from Netskope. Fetching data through the app fails, even after changing the token in the NetSkope app in Splunk. It is worth noting that the token was changed on the NetSkope side; we updated it in the NetSkope app in Splunk, but the issue still persists. Below are the errors received in Splunk:

log_level=ERROR pid=108789 tid=MainThread file="netskope_client.py" function="next_page" line_number="134" version="NetSkopeAppForSplunk.v1.2.0.b252" exception='data' message='data' exception_type=KeyError exception_arguments='data' filename=netskope_client.py exception_line=82

log_level=ERROR pid=65944 tid=MainThread file="Utilities.py" function="get_credential" line_number="340" version="NetSkopeAppForSplunk.v1.2.0.b252" message="Splunkd daemon is not responding: ('Error connecting to /servicesNS/nobody/NetSkopeAppForSplunk/storage/passwords: [Errno 104] Connection reset by peer',)" exception_type="SplunkdConnectionException" exception_arguments="Splunkd daemon is not responding: ('Error connecting to /servicesNS/nobody/NetSkopeAppForSplunk/storage/passwords: [Errno 104] Connection reset by peer',)" filename="Utilities.py" line="311"

Regards,
Kevin
I have a dynamic archive storage allotment, and with Professional Services we migrated our on-prem archive (buckets containing data up to 365 days old) to archive storage. However, if I select "archive" in the indexes view, I see two large indexes in particular that show data start dates of 2017 and Feb 2019 (365 days of retention would put the cutoff at October 2019), and they are pushing me over my storage allotment. Is there a way I can query or view (read-only) the bucket detail in dynamic archive storage (Splunk S3), including bucket dates, so I can identify which buckets span a long date range and request that they be removed? I have opened a ticket with support and they are investigating as well. I realize that retention-based removal is actually based on bucket end dates, so I must have some buckets that, for some reason (perhaps corrupted dates), span from 2017 or October 2019 with end dates later than October 2019. Is there a way for the cloud admin user to query the dynamic archive storage to list the data directories and discover buckets with their individual data start and end dates?
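For buckets Splunk can see, the dbinspect command lists one row per bucket with its start and end epochs; whether it also covers the dynamic archive (DDAA) tier is something to confirm with support. A sketch for spotting long-span buckets (index name is a placeholder):

```
| dbinspect index=your_index
| eval spanDays = round((endEpoch - startEpoch) / 86400)
| eval startDate = strftime(startEpoch, "%F"), endDate = strftime(endEpoch, "%F")
| sort - spanDays
| table bucketId state startDate endDate spanDays sizeOnDiskMB
```

Sorting by spanDays surfaces exactly the buckets whose date ranges straddle the retention cutoff.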
Hi,

If I have a directory structure like this:

/logs/server1
/logs/server2
/logs/server3

and I have set specific inputs.conf stanzas for servers 1, 2, and 3, but I also want to catch everything else that ends up in /logs/, can I keep the server1-3 stanzas and add a catch-all stanza, or will the catch-all stanza collect the logs from servers 1-3 a second time?

[monitor:///logs/server1]
host_segment = 2
index = foo
sourcetype = aaa

[monitor:///logs/server2]
host_segment = 2
index = bar
sourcetype = bbb

[monitor:///logs/server3]
host_segment = 2
index = baz
sourcetype = ccc

[monitor:///logs]
index = unknown
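If overlap does turn out to be a problem in your version, one way to make the catch-all mutually exclusive is a blacklist, which inputs.conf matches as a regex against the full file path. A sketch, assuming the subdirectories are literally named server1-3:

```
[monitor:///logs]
index = unknown
blacklist = /logs/server[123]/
```

This way the catch-all stanza skips anything under the three server directories, and only the dedicated stanzas pick those files up.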
Hello, has anyone else found it annoying to sift through the Timeline Table when analyzing a session? The timeline table can have 100+ rows, but it only shows several of them, depending on your resolution. I have found no way to expand the table or export the results so we can analyze the full session in one view. Is there any workaround for this? Thanks, Tom
Hello, I keep getting

SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:742)

when trying to add the input. I have added the SSL certs from AWS, as well as the ones we use, to cacert.pem in several locations on my host, but I still get the error. The proxy we are using allows the traffic through, but I am still unable to get a successful cert verification. Any ideas what might cause this? Thanks!
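As a diagnostic, it can help to confirm which CA bundle the Python runtime actually consults, since appending certs to a cacert.pem that OpenSSL never reads changes nothing. A small stdlib-only sketch:

```python
import os
import ssl

# Where this Python's SSL layer expects CA certificates by default.
paths = ssl.get_default_verify_paths()
print("cafile:", paths.cafile)   # default bundle file, if any
print("capath:", paths.capath)   # hashed-cert directory, if any

# OpenSSL also honors these environment variables, so exporting
# SSL_CERT_FILE to point at a bundle containing the proxy's CA is one
# quick way to test whether the bundle location is the problem.
print("env vars:", paths.openssl_cafile_env, "/", paths.openssl_capath_env)
print("SSL_CERT_FILE =", os.environ.get("SSL_CERT_FILE"))
```

If the add-on bundles its own Python or its own cacert.pem inside the app directory, that copy is the one that needs the proxy/AWS CA appended, not the system bundle.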
Does anyone know how to create annotations in a dashboard built with the Splunk Dashboards app (beta)? I did a lot of research through the docs and it isn't mentioned anywhere.
Hello everyone, this is my second post regarding the same question. I am using the Splunk plugin for Jenkins and trying to send data from Jenkins to Splunk. Can someone help me out?
1. What changes should be made in the Splunk for Jenkins Configuration? Should there be changes in the Customize Event Processing Command, or elsewhere? How can I make sure that there is a continuous flow of data from Jenkins to Splunk, and which script or command should be used here?
2. How can I verify that the data/logs are getting into Splunk from Jenkins and that the flow of data is continuous?
Please give me a step-by-step answer. Thank you.
Dear team, this is my setup for Splunk at home:
1 - Splunk Enterprise standalone
2 - multiple AWS accounts with CloudTrail enabled
3 - Splunk Add-on for AWS
I've completed the setup and have the Splunk Add-on for AWS running, with Cloudflare logs received from S3 into my index=cloudflare. I've also completed the CloudTrail log setup for index=cloudtrail1. However, I now have about 8 different AWS accounts, for different purposes, with CloudTrail enabled. How can I put them all into one index without having to create 8 different ones?
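If the goal is simply that every account's CloudTrail data lands in one index, a hedged sketch is one input per account, all pointing at the same index. The stanza and setting names below are assumptions about the Splunk Add-on for AWS; verify them against the add-on's README/inputs.conf.spec before use:

```
[aws_cloudtrail://account1_trail]
aws_account = account1
aws_region = us-east-1
sqs_queue = cloudtrail-queue-1
index = cloudtrail

[aws_cloudtrail://account2_trail]
aws_account = account2
aws_region = us-east-1
sqs_queue = cloudtrail-queue-2
index = cloudtrail
```

You still configure one input (and one account credential) per AWS account, but the index setting is what controls where the data is stored, so a single index can receive all eight.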
Hi, we have a custom requirement to monitor an API that performs a health check of all the backend systems and returns HTTPS response code 200 on success. The API uses two-way SSL. Is there a way to use the URL monitor to communicate with this API? We are able to deploy the URL monitor, even locally, if there is a way to bypass SSL via the URL Monitor configuration. We also tried the Service Availability feature, but keep getting an SSL handshake error; after checking with support, it apparently does not support two-way SSL. I'd appreciate any ideas on how to achieve this. Kind regards, Ashley Lewis
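If a custom check script is an acceptable fallback, two-way SSL is straightforward with Python's standard library: the client loads its own certificate and key into the SSL context in addition to verifying the server's certificate. A sketch with hypothetical file names and URL:

```python
import ssl
import urllib.request

def make_mtls_context(certfile=None, keyfile=None, cafile=None):
    """Build an SSL context for two-way (mutual) TLS.

    certfile/keyfile: this client's certificate and private key,
    presented to the server during the handshake.
    cafile: optional CA bundle used to verify the *server's* cert
    (falls back to the system defaults when omitted).
    """
    ctx = ssl.create_default_context(cafile=cafile)
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx

def health_check(url, ctx):
    """Return the HTTP status code of a GET against the health endpoint."""
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.status

# Example usage (paths and URL are placeholders):
# ctx = make_mtls_context("client.pem", "client.key", "internal-ca.pem")
# assert health_check("https://api.example.internal/health", ctx) == 200
```

The script's exit code or printed status could then be picked up by a scripted input, so Splunk alerts when the check stops returning 200.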
Please help me with transforms.conf. Available indexes:

index_pr_prod
index_ee_psvt
index_np_psup

The index name has three parts: the first part is always the same, the second part needs to be derived from the host, and the third part from asset_env. I tried the transform below, but it did not work. Do we need two different stanzas in transforms.conf?

[change_index_name]
SOURCE_KEY = MetaData:Host
REGEX = ^host::\w{12}(?i)(ee|pr|ps)
DEST_KEY = _MetaData:Index
SOURCE_KEY = MetaData:Asset_Env
REGEX = ^asset_env::\w{5}
DEST_KEY = _MetaData:Index
FORMAT = index_$1_$2
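A transforms.conf stanza supports only one SOURCE_KEY/REGEX pair, so in the attempt above the second SOURCE_KEY simply overrides the first. One alternative to verify (an assumption, not a confirmed fix) is INGEST_EVAL (Splunk 7.2+), which can combine several fields in a single expression. This sketch assumes the environment code sits in the host's 13th-14th characters, per the regex above, and that asset_env is available as an index-time field at this point in the pipeline:

```
# transforms.conf
[change_index_name]
INGEST_EVAL = index="index_".lower(substr(host, 13, 2))."_".asset_env

# props.conf
[your_sourcetype]
TRANSFORMS-set_index = change_index_name
```

Note that if the computed index name does not exist on the indexers, the events will fail to route, so the expression needs to produce exactly the index names listed above.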
Hi, I would like to send Aternity data to Splunk, and I was wondering if anyone here has done this before, because I can't find any documentation on it. I am not familiar with the product, its API, or whatever other options there might be. I found that there is a Splunk app for Aternity (https://splunkbase.splunk.com/app/4682/), but this is just for dashboards, and a TA (https://splunkbase.splunk.com/app/1140/), which just provides some props and transforms, but no input. There is also something called "Cloud-Native Integration for Riverbed APM" (http://portal.it-squared.net/wp-content/uploads/2019/08/Splunk-and-IT-Squared-Cloud-Native-Integration-for-Riverbed-APM-Documentation.pdf) which could use HEC, but I can't find how to set this up; it just states "Once Splunk is configured follow the IT Squared / Cloud-Native Integration for Riverbed APM documentation to deploy it and set up data ingestion into Splunk." Another hint was that the agents store the data in databases, so there might be a way to get the data with DB Connect, but that's the last option I want to consider. Any help or hint is highly appreciated. Thank you, David
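If HEC turns out to be the route, the mechanics on the Splunk side are just a POST to the collector endpoint with a token header. A sketch that only builds the request (host, token, and sourcetype here are placeholders, not values from any Aternity documentation):

```python
import json
import urllib.request

def build_hec_request(host, token, event, port=8088):
    """Build (but do not send) a Splunk HTTP Event Collector request.

    host and token are placeholders for your HEC endpoint and token;
    the sourcetype below is a made-up example.
    """
    url = "https://{}:{}/services/collector/event".format(host, port)
    body = json.dumps({"event": event, "sourcetype": "aternity"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": "Splunk " + token,
                 "Content-Type": "application/json"},
        method="POST",
    )

# Usage (would actually send the event):
# req = build_hec_request("splunk.example.com", "YOUR-HEC-TOKEN",
#                         {"metric": "app_launch_time", "value": 1.2})
# urllib.request.urlopen(req)
```

Whatever exports the data out of Aternity (its API, a script, or the Riverbed integration) would then issue requests of this shape; the open question is only the Aternity side.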
Hi everyone, I need to extract the value from a string before a trailing "_X", where X is any integer. Our string looks like a_b_c_X. Could you please advise how I can do that? Thank you in advance ☺️
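In case it helps, here is how the extraction looks in Python; the same idea in SPL would be something like rex field=yourfield "^(?<prefix>.+)_\d+$" (field name assumed):

```python
import re

def strip_numeric_suffix(s):
    """Return the part of s before a trailing "_<integer>" suffix.

    If there is no such suffix, s is returned unchanged.
    """
    return re.sub(r"_\d+$", "", s)

print(strip_numeric_suffix("a_b_c_42"))  # -> a_b_c
```

Anchoring the pattern with $ makes sure only the final _X is removed, so underscores earlier in the string (a_b_c) are preserved.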
Hi team, my Splunk instance has been upgraded to a new version. When I search with index="ABC" I see the errors below. Can someone guide me on this?
I am unable to log in to a recently created trial. From My Instances, I do not have the option to INVITE USERS, only ACCESS INSTANCE. Details:
PRODUCT: Splunk>Cloud Trial
SIZE: 5GB
START DATE: Oct 25, 2020
EXPIRATION DATE: Nov 09, 2020
INSTANCE NAME: testsplunk
Thanks!