Hello, I have been searching for a post similar to mine and just have not found it. I need to add some user inputs to a search. The process: we assign GPS units to specific serial numbers for tracking purposes. The GPS unit is put on a product, then removed when the product leaves one of our facilities. I would like to be able to barcode-scan the unit, then barcode-scan the serial number, and have both values available in my searches. I threw together a lookup file to do this, but I feel like there is a better way. The only references I would really need are the gps_id and the serial_number. I'll gladly give more description as necessary. I'm only partly familiar with more advanced HTML, CSS, and JS.
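For the lookup-based approach the question describes, event enrichment is a one-line lookup call; a minimal sketch, assuming a lookup file named gps_assignments.csv with columns gps_id and serial_number, and an index named tracking (all three names are placeholders):

```
index=tracking
| lookup gps_assignments.csv gps_id OUTPUT serial_number
| table _time gps_id serial_number
```

A KV Store-backed lookup instead of a CSV would let a simple dashboard form append each new scan via outputlookup append=true, which may be the "better way" being asked about.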
Dears, greetings. When I navigate to Apps > Manage Apps > Cisco eStreamer eNcore for Splunk, I can't find the "Set Up" button to open the configuration page. Can anyone help me solve this issue? I use the latest version, 4.0.11. Regards, Ahmed Foda
Page navigation is not working in the new dashboard. I just added a table panel showing all events, and when I try to use the page links, the "Page 1" and "Prev" links do not work. I am using Splunk Dashboard Studio Version: 1.0.2, Build: 61341731.
Hi all, I'm trying to create a chart containing two timecharts for different time frames (e.g. today/yesterday). How can I achieve that? Currently I'm getting one after the other on the same graph; basically I'd like to overlay one timechart on the other.

index=ddos device_event_class_id=Bandwidth earliest=-1d@d latest=-0d@d
| rex field=msg "msg=.+raffic.+'(?<pg_name>[\w\s\-]+)'.+(?<bps>\d+\.\d+\s.+)\..+"
| eval ReportKey="yesterday"
| timechart span=3h count by pg_name
| append
    [search index=ddos device_event_class_id=Bandwidth earliest=-2d@d latest=-1d@d
    | rex field=msg "msg=.+raffic.+'(?<pg_name>[\w\s\-]+)'.+(?<bps>\d+\.\d+\s.+)\..+"
    | eval ReportKey="beforeyesterday"
    | timechart span=3h count by pg_name ]
| fillnull value=0
| eval mytime=strftime(_time, "%H:%M")
| sort mytime

Thanks in advance.
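For overlaying the same series across consecutive days, SPL's timewrap command handles the time-alignment automatically; a sketch reusing the index and rex from the question (note it wraps a single series, so the by pg_name split is dropped here):

```
index=ddos device_event_class_id=Bandwidth earliest=-2d@d latest=-0d@d
| rex field=msg "msg=.+raffic.+'(?<pg_name>[\w\s\-]+)'.+(?<bps>\d+\.\d+\s.+)\..+"
| timechart span=3h count
| timewrap 1d
```

timewrap splits the timechart output into one overlaid series per day, which is the visual effect the append-based query is trying to produce.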
Hello everyone, someone may already have solved outputting grouped events together with a location resolved from an IP. How do I avoid losing the location data when grouping events? My SPL request is:

| search ...
| stats count(tunnelid) as sessioncount, values(StartTime) as StartTime, values(tunnelid) as tunnelid, values(tunnelip) as tunnelip, values(remip) as remip, values(vendor_action) as vendor_action by user
| iplocation remip

Of course, when only a single IP is present, its location is displayed. How can I display the location of each IP in the grouped events?
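One approach is to resolve the location per event, before the stats, so each IP gets its own lookup and the results survive grouping via values(); a sketch based on the search above (Country/City are the fields iplocation adds):

```
| search ...
| iplocation remip
| stats count(tunnelid) as sessioncount,
        values(StartTime) as StartTime,
        values(tunnelid) as tunnelid,
        values(tunnelip) as tunnelip,
        values(remip) as remip,
        values(Country) as Country,
        values(City) as City,
        values(vendor_action) as vendor_action
        by user
```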
Hi all, previously we used the endpoint /servicesNS/nobody/my_app_name/admin/savedsearch/_reload to reload saved searches while the search head was online. Since moving to Splunk 8, this endpoint does nothing: searches keep running if I set disabled=1 manually in the config, yet the endpoint returns a 200 OK, as below:

/opt/splunk/bin/splunk _internal call /servicesNS/-/-/admin/savedsearch/_reload -user admin
QUERYING: 'https://127.0.0.1:8089/servicesNS/-/-/admin/savedsearch/_reload'
HTTP Status: 200. Content:
<?xml version="1.0" encoding="UTF-8"?>
<!--This is to override browser formatting; see server.conf[httpServer] to disable. [padding trimmed] .-->
<?xml-stylesheet type="text/xml" href="/static/atom.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">
  <title>savedsearch</title>
  <id>https://127.0.0.1:8089/servicesNS/-/-/admin/savedsearch</id>
  <updated>2021-04-15T11:22:04+02:00</updated>
  <generator build="63079c59e632" version="8.1.3"/>
  <author>
    <name>Splunk</name>
  </author>
  <link href="/servicesNS/-/-/admin/savedsearch/_new" rel="create"/>
  <link href="/servicesNS/-/-/admin/savedsearch/_reload" rel="_reload"/>
  <link href="/servicesNS/-/-/admin/savedsearch/_acl" rel="_acl"/>
  <opensearch:totalResults>0</opensearch:totalResults>
  <opensearch:itemsPerPage>30</opensearch:itemsPerPage>
  <opensearch:startIndex>0</opensearch:startIndex>
  <s:messages/>
</feed>

Has this changed? Are we no longer able to hit this endpoint manually? We can still hit https://localhost:8089/en-GB/debug/refresh to reload all entities (specifying entity=admin/savedsearch doesn't seem to work either), but this can time out on behemoth search heads and leaves us having to restart the search head if the config is not applied. Is there anything we can do about this? How do we trigger a manual reload of the savedsearch config in Splunk 8? Thanks! Kind regards, Alex
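A related thing worth trying is the per-config-file reload endpoint, which reloads a single .conf file rather than an admin handler; a sketch, assuming admin credentials (whether it behaves differently from admin/savedsearch/_reload on 8.x is exactly the open question here):

```
curl -k -u admin:changeme -X POST \
  https://127.0.0.1:8089/servicesNS/-/-/configs/conf-savedsearches/_reload
```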
Hi! I have a data set consisting of a CSV file with three columns of numerical data. I have written my own implementation that clusters the data set with K-means and then flags outliers based on the Euclidean distance between data points and the cluster centroids. I want to perform the same kind of operation in Splunk but have not been successful so far. I have tried Local Outlier Factor, with the following query in search:

source="dataset.csv"
| fit LocalOutlierFactor 0,1,2
| search isOutlier="1.0"

However, the result from this search is very poor, since very few outliers are detected. The data set is labeled, making it easy to see correctly classified outliers. I have also tried "Detect numeric outliers" from the Machine Learning Toolkit, but there I can only choose one field to analyze, and I have three fields. Is there an optimal solution to the problem of finding outliers in this type of dataset? Thanks in advance!
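The K-means-plus-distance approach itself can be approximated in SPL with the MLTK KMeans algorithm plus a manual Euclidean distance; a sketch, assuming the three columns are named f1, f2, f3 and that a 3-sigma cut on the distance is an acceptable threshold (both assumptions, not part of the question):

```
source="dataset.csv"
| fit KMeans k=3 f1 f2 f3
| eventstats avg(f1) as c1 avg(f2) as c2 avg(f3) as c3 by cluster
| eval dist=sqrt(pow(f1-c1,2)+pow(f2-c2,2)+pow(f3-c3,2))
| eventstats avg(dist) as mean_dist stdev(dist) as sd_dist by cluster
| eval isOutlier=if(dist > mean_dist + 3*sd_dist, 1, 0)
```

The eventstats averages stand in for the centroids, which matches K-means centroids once the fit has converged.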
I have the address of a data point that is being sent to Splunk. When I search for the data point in Splunk, I get the address, but it is garbled and extra zeros have been added to it. What could be the reason for this error, and what can I change to fix the problem?

The address as it is sent to Splunk: 1.1.98:20.14.4.137.1
The address as it arrives in Splunk search: 001.001.098:00137 0.020.014.0004.001
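As a search-time workaround, the zero padding can be stripped with a sed-style rex; a sketch, assuming the value lands in a field called address (the field name is a placeholder, and this only normalizes the padding, not whatever upstream component is reformatting the value):

```
| rex field=address mode=sed "s/(^|[.:\s])0+(\d)/\1\2/g"
```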
Using Splunk Analytics for Hadoop, is it possible to update or delete a record?
Hi, we are seeing more than 70,000 of these messages per day per instance on several search heads on Splunk 8.0.5.1 and SUSE Linux 12:

WARN SearchResultsCSVSerializer - CSV file  contains invalid field '', ignoring column.

(There are actually two spaces after "file", and '' are two single quotes.) In a search head cluster, only the captain seems to report this. If I clone the Splunk installation from an affected search head to a similar but unaffected one, I cannot reproduce the messages there. At startup it kicks in around here:

04-12-2021 16:56:47.361 +0200 INFO ServerRoles - Declared role=search_head.
04-12-2021 16:56:49.680 +0200 INFO ServerRoles - Declared role=kv_store.
04-12-2021 16:56:49.684 +0200 INFO CertStorageProvider - Updating status from starting to ready
04-12-2021 16:56:49.684 +0200 INFO CertStorageProvider - Updating status from starting to ready
04-12-2021 16:56:49.684 +0200 INFO Rsa2FA - Could not find [externalTwoFactorAuthSettings] in authentication stanza.
04-12-2021 16:56:50.911 +0200 WARN SearchResultsCSVSerializer - CSV file  contains invalid field '', ignoring column.

When the system goes down, it stops here:

04-12-2021 16:56:02.889 +0200 WARN SearchResultsCSVSerializer - CSV file  contains invalid field '', ignoring column.
04-12-2021 16:56:03.831 +0200 INFO loader - Shutdown HTTPDispatchThread
04-12-2021 16:56:03.831 +0200 INFO ShutdownHandler - Shutting down splunkd

a) Has anyone seen this too? And if so, how did you fix it?
b) How can I get Splunk to report more detail, such as which component wants to open that file? I found no useful information on SearchResultsCSVSerializer, and strace on Linux did not provide any clue either.

Thanks in advance, Volkmar
I have the following data: I am trying to write SPL that gives me the following: Essentially, change_complete will be a new field and will be marked "Yes" only if all the hosts for that particular customer have flag_enabled = "Yes"; otherwise change_complete = "No". I have tried eval and stats functions to get there, but with no luck.
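A per-customer "all hosts say Yes" check maps naturally onto eventstats; a sketch, assuming the grouping field is named customer (field names are taken from the description, not a confirmed schema):

```
| eventstats values(flag_enabled) as all_flags by customer
| eval change_complete=if(mvcount(all_flags)=1 AND all_flags="Yes", "Yes", "No")
| fields - all_flags
```

values() deduplicates, so a customer whose hosts are all "Yes" ends up with a single-value field equal to "Yes"; any "No" in the mix produces more than one value (or a lone "No") and fails the test.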
How do I get the proxy settings inside the Python helper's validate_input function for a data input in a Splunk add-on?
Hi, we have a requirement to push events/logs from our applications to different customers' Splunk Enterprise/Cloud instances (each customer receiving only its own events). Our application is a cloud solution and runs on a Kubernetes cluster. I am looking for a solution in which one application can filter events and push them to the different customers' Splunk instances. Can you suggest which Splunk application can be used to solve this? I've read that the Splunk Universal Forwarder can be installed and used to push data, but can the same Universal Forwarder instance push to multiple customers' Splunk instances? I also saw that Splunk Connect for Syslog can be installed and used to push data to a Splunk instance; can that be applied to this use case? If there is a better solution, please let me know.
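On the Universal Forwarder question specifically: a single forwarder can define several output groups and route inputs selectively with _TCP_ROUTING; a minimal sketch (hostnames, group names, and monitor paths are placeholders):

```
# outputs.conf
[tcpout:customerA]
server = splunk.customer-a.example:9997

[tcpout:customerB]
server = splunk.customer-b.example:9997

# inputs.conf -- route each monitored source to one output group
[monitor:///var/log/app/customer_a/...]
_TCP_ROUTING = customerA

[monitor:///var/log/app/customer_b/...]
_TCP_ROUTING = customerB
```

Routing by event content (rather than by source path) would instead need props/transforms on a heavy forwarder.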
I created an app containing most of the required elements, including macros, lookups, navigation, UI, savedsearches, etc. However, when I switch to the app first and then go to Settings -> Macros, Splunk gives me internal server error 500; or go to Settings -> Lookups, Splunk gives me internal server error 500; or go to Settings -> Navigation menus, Splunk gives me internal server error 500. If I switch to the default Search app first, then go to Settings and open the configuration page (macros, lookups, or navigation), and then switch to my app via the drop-down menu, I can edit the content without getting internal server error 500. The search head tier is a 4-member search head cluster built on Splunk 8.0.2, running on CentOS 7 Linux. Any clue?
No data is being indexed from our setup below. Is "Invalid key in stanza ..... line 36: kv_mode (value: xml)" the reason? If yes, what's wrong with KV_MODE = xml?

From Splunk logs:

04-14-2021 19:34:49.555 +0200 WARN Application - Invalid key in stanza [XXXX] in /opt/splunk/etc/deployment-apps/XXXXXX/local/props.conf, line 36: kv_mode (value: xml).\n

props.conf
27: [XXXX]
28: BREAK_ONLY_BEFORE = goblygook
29: MAX_EVENTS = 200000
30: DATETIME_CONFIG = NONE
31: CHECK_METHOD = modtime
32: pulldown_type = true
33: LEARN_MODEL = false
34: SHOULD_LINEMERGE = true
35: TRUNCATE = 0
36: KV_MODE = xml
37: TRANSFORMS-set = setnull, accept_xml_files

transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[accept_xml_files]
REGEX = <?xml version
DEST_KEY = queue
FORMAT = indexQueue

inputs.conf
[monitor:///path/*.xml]
index = idx_sample
sourcetype = XXXX
crcSalt = <SOURCE>
initCrcLength = 512

Thank you.
I installed the Phantom App for Splunk and the CIM app, and the connectivity test between Splunk and Phantom passes. I created alerts in Splunk to send events to Phantom, but I don't see any records in Phantom's events. In the cim_modactions index, I can see an error log: "Unable to create container: cannot execute INSERT in a read-only transaction". So is that what's causing the problem? I don't understand what permissions I'm missing.
Hello all, I'm new to Splunk and have a task to complete that requires one of the following: a. connect Splunk to a Snowflake database, or b. load Splunk query output (CSV) into Tableau for visualization directly from Splunk, i.e. Splunk + Tableau integration. The goal is to pull data from Splunk, load it into Snowflake, and then visualize it. I've explored a bit and understood that there is no elegant way to integrate these connections. As a result, I chose to use an ETL tool (Talend) to run a Splunk query, retrieve the response, and load it into the Snowflake database. Could anyone advise me on the best course of action for my situation? Also, parsing the generated output (JSON, XML) using Talend isn't working as desired. Any assistance will be greatly appreciated.
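One way to hand results to an ETL tool without parsing UI responses is Splunk's search export REST endpoint, which streams results as CSV or JSON; a sketch, assuming admin credentials and with the host, credentials, and query as placeholders:

```
curl -k -u admin:changeme \
  https://splunk.example:8089/services/search/jobs/export \
  --data-urlencode search='search index=main sourcetype=app_logs | stats count by host' \
  -d output_mode=csv \
  -o results.csv
```

A Talend HTTP job (or any scheduler) can fetch the CSV this way and load it straight into Snowflake, which avoids the JSON/XML parsing trouble described above.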
I have logs like

{"message": "Submitted amount category1: 213, category2: 543.56, category3: 4343.00", "specialCustomer": "true"}

I am trying to extract the sum of amount by each category and customer type. My query is like this:

| search message="Submitted amount *"
| rex field=message "(?<category>\w+): (?<amount>\d*\.?\d*)"
| eval userType=if(isnotnull(specialCustomer), "Special", "Normal")
| chart sum(amount) as Amount by userType, category
| addtotals
| eval category=case(
    category="category1", "Category 1",
    category="category2", "Category 2",
    category="category3", "Category 3")

However, I am only getting data for one category. What is wrong with my regex? Do I need to use extract for this?
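By default rex keeps only the first match per event; max_match=0 captures all of them, after which the multivalue category/amount pairs can be expanded back into rows; a sketch built on the query above:

```
| search message="Submitted amount *"
| rex field=message max_match=0 "(?<category>category\d+): (?<amount>\d+(?:\.\d+)?)"
| eval pair=mvzip(category, amount, "|")
| mvexpand pair
| eval category=mvindex(split(pair, "|"), 0), amount=mvindex(split(pair, "|"), 1)
| eval userType=if(isnotnull(specialCustomer), "Special", "Normal")
| chart sum(amount) as Amount by userType, category
```

mvzip/mvexpand keeps each category paired with its own amount instead of producing a cross-product of the two multivalue fields.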
Hi Splunk Community, good day. I am trying to add an AWS EC2 instance with Splunk installed on it (standalone) as a slave to an on-prem Splunk license server. However, I am getting the error below:

ERROR LMTracker - failed to send rows, reason='Unable to connect to license master=https://XXXXXXX:8089 Error connecting: Connection reset by peer'

From checking, Splunk is up on both the master and the slave, and I can see a connection from a curl test from the slave to the master:

* Rebuilt URL to: telnet://XXXXXXX:8089/
* Trying 10.XX.XXX.XX...
* TCP_NODELAY set
* Connected to XXXXXXX (10.XX.XXX.XX) port 8089 (#0)

Please help advise. Thanks all. Cheers!
All, I am completely new to this forum, and I have managed to get my first TA working. I need to understand how you get the "input name" from the Inputs area. I was hoping I could use helper.get_arg('input_name') to get the value. Thanks again for any information on how to pull that value. Regards, Lu