All Topics



Is there a way to tell Splunk that whenever a value has the format "32770": ALL_REQ:2 | CT_FLAG(32768), it should be kept as a single field value in the CSV?

Data:
"123","EMPTY","1766 Bytes","32770": ALL_REQ:2 | CT_FLAG(32768),"131680": 20(32) | CT_FLAG |MODIFIED:20000(131072),"44d5","200 bytes"

Using normal CSV extraction, Splunk extracts the fields as:
field1: 123
field2: EMPTY
field3: 1766 Bytes
field4: "32770": ALL_REQ:2 | CT_FLAG(32768),"131680": 20(32) | CT_FLAG |MODIFIED:20000(131072), 44d5
field5: "200 bytes"

Splunk combines what should be field4 and field5 into a single field, and all the following field values get shifted. The result required after field extraction is:
field1: 123
field2: EMPTY
field3: 1766 Bytes
field4: "32770": ALL_REQ:2 | CT_FLAG(32768)
field5: "131680": 20(32) | CT_FLAG |MODIFIED:20000(131072)
field6: 44d5
field7: 200 bytes
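One possible approach (a sketch, not tested against your full data set): skip the CSV parser for this source and extract the fields with rex, treating each "NNNNN": ... chunk as one field. This assumes the flag-style fields never contain commas internally, as in the sample above; the field names are placeholders:

```spl
| rex field=_raw "^\"(?<field1>[^\"]*)\",\"(?<field2>[^\"]*)\",\"(?<field3>[^\"]*)\",(?<field4>\"\d+\":[^,]*),(?<field5>\"\d+\":[^,]*),\"(?<field6>[^\"]*)\",\"(?<field7>[^\"]*)\"$"
```

The reason the built-in CSV extraction merges the fields is that the flag values start with a quoted token ("32770") but are not quoted as a whole, which confuses the quote-aware comma splitting; a custom regex sidesteps that.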
For an app I'm developing, I was hoping a user could simply fill in a field specifying where certain sourcetypes for the app will be ingested from, because it will vary from user to user depending on where they store the data my app ingests locally. Is there a way I can configure this in setup.xml? Or do they always need to configure it manually in their inputs.conf?
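For what it's worth, setup.xml can render a text field that creates an input via a REST endpoint, which may cover this case; a minimal sketch (the block title and label are placeholders, not from your app):

```xml
<setup>
  <block title="Data location" endpoint="data/inputs/monitor" entity="_new">
    <input field="name">
      <label>Path to the directory where the data is stored locally</label>
      <type>text</type>
    </input>
  </block>
</setup>
```

This writes a monitor stanza into the app's local inputs.conf when the user completes setup, so they would not need to edit inputs.conf by hand.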
Hi Team, I have started a MongoDB agent, and from the logs below it states that it registered successfully. But even after several minutes, it is not displayed in the controller, nor does it show any logs or errors after this. I tried restarting the agent and am still not able to see the agent displayed in the controller. I'm using a trial license. [Log files redacted] Thanks, Mahudees ^ Post edited by @Ryan.Paredez to remove log files. Please do not share or attach log files to community posts, for security and privacy reasons.
I'm totally new to Splunk. I have this JSON file already indexed:

{"EventType":2,"EventData":{"Values":[{"Status":1,"Name":"BOT1"},{"Status":0,"Name":"BOT2"},{"Status":0,"Name":"BOT3"},{"Status":1,"Name":"BOT4"}],"Subject":"Resource Online Status","Source":"Dashboard"}}

I need to create a table which contains the Values in separate columns, like this:

ID  STATUS  RESOURCE
1   1       BOT1
2   0       BOT2
3   0       BOT3
4   1       BOT4

I'm trying the following:

index="main" resource online Status | table "EventData.Values{}.Name" "EventData.Values{}.Status" | sort -_time asc | head 1

But it gives me one row where EventData.Values{}.Name holds BOT1 BOT2 BOT3 BOT4 and EventData.Values{}.Status holds 1 0 0 1. How can I combine the two columns to generate the desired format? Thank you!
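A common pattern for this (a sketch, assuming the auto-extracted multivalue fields shown above) is to zip the two multivalue fields into pairs, expand one row per pair, and split them back apart:

```spl
index="main" resource online Status
| head 1
| eval pair=mvzip('EventData.Values{}.Status', 'EventData.Values{}.Name')
| mvexpand pair
| eval STATUS=mvindex(split(pair, ","), 0), RESOURCE=mvindex(split(pair, ","), 1)
| streamstats count as ID
| table ID STATUS RESOURCE
```

The single quotes around the field names matter because the names contain dots and braces; streamstats here just numbers the expanded rows to produce the ID column.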
Hi, I have an all-in-one Splunk Enterprise environment with only Universal Forwarders. My requirement is to send all logs in raw format to a third-party syslog server. I know I cannot configure syslog forwarding from a UF and that it is only possible through an HF. Is there any way I can forward raw syslog directly from the Splunk Enterprise instance? Thanks in advance.
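Since a full Splunk Enterprise instance has the same forwarding capabilities as a heavy forwarder, syslog output can be configured in its outputs.conf; a sketch, where the group name, host, and port are placeholders:

```ini
# outputs.conf on the Splunk Enterprise instance
[syslog]
defaultGroup = my_syslog_group

[syslog:my_syslog_group]
server = syslog.example.com:514
type = udp
```

defaultGroup sends everything; for selective routing you would instead set _SYSLOG_ROUTING on specific sourcetypes via props/transforms.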
Hello, how can I configure a search head cluster with Ansible and Kubernetes? This is my configuration:

    splunk-chart:
      namespace: dev-aviation-01
      persistence:
        search:
          dataSize: 50Gi
          configSize: 10Gi
        master:
          dataSize: 50Gi
          configSize: 10Gi
        indexer:
          dataSize: 250Gi
          configSize: 10Gi
      app:
        configs:
          enabled: true
          ## The image must contain 'indexer','master', and 'search' dirs in /data
          image:
            repository: gcr.io/argussec1/splunk-aviation-configs
            tag: 2.3.0
      env:
        - name: SPLUNK_BEFORE_START_CMD
          value: sudo rm /opt/splunk/var/lib/splunk/kvstore/mongo/mongod.lock
      indexer:
        replicas: 1
        resources:
          requests:
            memory: 4Gi
            cpu: 1
          limits:
            memory: 8Gi
            cpu: 4
      # default configuration loaded by splunk, exposed by nginx
      splunkDefaults:
        defaultYml:
          ansible_post_tasks: null
          ansible_pre_tasks: null
          config:
            baked: default.yml
            defaults_dir: /tmp/defaults
            env:
              headers: null
              var: SPLUNK_DEFAULTS_URL
              verify: true
            host:
              headers: null
              url: null
              verify: true
            max_delay: 60
            max_retries: 3
            max_timeout: 1200
          hide_password: false
          retry_num: 50
          shc_bootstrap_delay: 30
          splunk:
            admin_user: admin
            allow_upgrade: true
            app_paths:
              default: /opt/splunaviationtc/apps
              deployment: /opt/spaviationk/etc/deployment-apps
              httpinput: /opt/splaviation/etc/apps/splunk_httpinput
              idxc: /opt/splunk/eaviationmaster-apps
              shc: /opt/splunk/etaviationhcluster/apps
            enable_service: false
            exec: /opt/splunk/bin/splunk
            group: splunk
            hec_disabled: 0
            hec_enableSSL: 0
            hec_port: 8088
            hec_token: ea
            home: /opt/splunk
            http_enableSSL: 0
            http_enableSSL_cert: null
            http_enableSSL_privKey: null
            http_enableSSL_privKey_password: null
            http_port: 8000
            idxc:
              enable: false
              label: idxc_label
              replication_factor: 3
              replication_port: 9887
              search_factor: 3
              secret: T
            ignore_license: false
            license_download_dest: /tmp/splunk.lic
            nfr_license: /tmp/nfr_enterprise.lic
            opt: /opt
            password: ""  # overridden in the environment variables
            pid: /opt/splunk/var/run/splunk/splunkd.pid
            s2s_enable: true
            s2s_port: 9997
            search_head_cluster_url: null
            secret: null
            shc:
              enable: false
              label: shc_label
              replication_factor: 3
              replication_port: 9887
              secret: C
            smartstore: null
            svc_port: 8089
            tar_dir: splunk
            user: splunk
            wildcard_license: false
          conf:
            server:
              directory: /opt/splunk/etc/system/local
              content:
                clustering:
                  summary_replication: true
                splunk_home_ownership_enforcement: true

But I don't see any cluster, or even more than one SH... what am I missing?
Hi everyone. I want to show device names and their status (connected/disconnected) on a map. The color of the points should change based on status (green for connected, red for disconnected). If I show the device name, I cannot set the color based on status; if I set the color by status, I cannot show the names on the map. Any help is appreciated.
I am learning Splunk, and I can see there are two common ways regex is used to generate fields: configured field extractions, or the rex SPL command. I am wondering if there is a benefit to using the configured extraction over rex. In my view the configured extraction is less efficient, because it runs the regex against all matching logs, while the rex command runs only over the events returned by the SPL search, which uses fewer resources. Am I wrong to think this way? Can you explain why? Thanks in advance!
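For concreteness, the two forms being compared look like this (a sketch; the sourcetype and field names are placeholders). The inline version, per search:

```spl
index=web sourcetype=access | rex field=_raw "user=(?<user>\w+)"
```

and the persistent search-time extraction in props.conf:

```ini
[access]
EXTRACT-user = user=(?<user>\w+)
```

One point worth noting when weighing the two: EXTRACT-style extractions are applied at search time, not at index time, so they also only run over the events a search actually returns; their main benefit is that the field exists for every search and every user without repeating the regex.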
Hello, I have created a custom add-on to pull events from the 'Teachworks' API. But, as per my config (refer to the screenshot), duplicate records are created whenever the API call pulls new events. Example:

Run 1: 5 records available, 5 records pulled into Splunk.
Run 2: 5 records available, 0 records pulled into Splunk.
Run 3: 6 records available, 6 records pulled into Splunk.

I expect only 1 record (the new entry) to be pulled into Splunk during Run 3, not all 6 records. Any assistance will be helpful.
I want to learn Splunk. How can I set up Splunk on my home WiFi network to learn and practice? I have a Verizon router and:

1 laptop with Windows 10
1 laptop dual-booting Windows 10 / Kali Linux
2 desktop PCs with Windows 10

Thanks in advance!
Hi All, I am trying to build a query to get the website hits for each IP. There are 16 server IPs, and I want to get the traffic served by each IP every 15 minutes. I have a scheduled script running every 15 minutes that writes the details below to the logs. Can you help with how I can extract the hit counts for all IPs and show them in a timechart with span=15m? Below is the log file capturing the IP and hit count every 15 minutes:

10.83.49.14 25155
10.83.49.17 21461
10.83.49.18 32736
10.83.49.21 15529
10.83.49.19 19987
10.83.49.20 16751
10.183.49.14 27953

Thanks, Ajay
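A sketch, assuming each log line is exactly "<ip> <count>" as shown (the index name is a placeholder):

```spl
index=web_hits
| rex field=_raw "^(?<server_ip>\d+\.\d+\.\d+\.\d+)\s+(?<hits>\d+)$"
| timechart span=15m sum(hits) by server_ip
```

sum(hits) is used rather than count because each event already carries the 15-minute hit total for that server, so counting events would just give 1 per interval.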
Hi everyone. Can someone who has used the map command help me? I am trying to look up the username from the 12 hours before the first search, but the result does not return any value. This is my query; maybe I'm doing something wrong:

host=10.10.10.30 direction=in earliest=-15m latest=-1m | stats count by src_ip | map search="host=10.10.10.30 earliest=-12h latest=-15m src_ip=$src_ip$ username=*"
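One thing worth checking (a sketch, not a confirmed fix): the string passed to map is expected to begin with an explicit search command, and map only runs a limited number of iterations by default, so something like this may behave differently:

```spl
host=10.10.10.30 direction=in earliest=-15m latest=-1m
| stats count by src_ip
| map maxsearches=100 search="search host=10.10.10.30 earliest=-12h latest=-15m src_ip=$src_ip$ username=*"
```

If an inner search returns no events, map contributes no rows for that src_ip, so it's also worth running one inner search by hand with a literal IP to confirm events with username actually exist in that window.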
I wonder if the activity of deleting audit events from Splunk Cloud will be logged/tracked in Splunk internal logs, e.g. logged under the splunk_ui_access sourcetype. If so, is there an official document that clearly states this? Is there any other evidence that someone deleted audit events?
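As a starting point (a sketch; the field values here are assumptions to verify against your own _audit data), deletions performed through the `| delete` SPL command are themselves searches, so they should appear in the audit trail:

```spl
index=_audit action=search info=granted search="*| delete*"
```

This only catches `| delete`-style removal; whether other deletion paths in Splunk Cloud leave equivalent audit records is a question for the official audit documentation or Splunk support.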
Hi Splunkers, I need a custom adaptive response, and I read this documentation: "https://dev.splunk.com/enterprise/docs/developapps/enterprisesecurity/adaptiveresponseframework/exampleadaptiveresponse/" I did what it says on that page. I also tried the Add-on Builder, and the result is the same: no custom adaptive response shows on the Incident Review page, and I cannot include this app in any correlation search alert action. My app name is TA-tck. Splunk version is 7.2.5.1, Splunk ES version is 5.2.0. Also, "supports_adhoc": true is set in the alert_actions.conf file, and I restarted Splunk. How do I fix this? Thank you all.
Using the REST API, I am currently retrieving a set of events from Splunk and extracting all of the field names and log sources, simultaneously building a map of log sources and the fields belonging to them. Is there any way I can retrieve this data with a minimal payload? For example, if I pull back one record that is from LogSource1 and has Property1 equal to [some really long string], I really don't want that whole string back; I just need to consume LogSource1 and Property1. I'm open to any ideas.
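One way to shrink the payload (a sketch; the field names are stand-ins for yours) is to have the search itself drop the raw event and return only the metadata you consume:

```spl
index=*
| fields source sourcetype Property1
| fields - _raw
| table source sourcetype Property1
```

Separately, the REST results endpoint accepts an `f` query parameter to restrict which fields come back in the response (e.g. repeating f=source&f=Property1), which can be combined with the above so neither the search artifacts nor the HTTP payload carry the long values.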
Hello, I have a Splunk query which generates some output, and I want to send this output to Grafana/Prometheus. What steps should I follow to achieve this?
Hi all, I would like to have a dashboard with these 3 columns:

1. Name of the "Country".
2. "Status" column, which will not have any value, but whose cells will be filled with a color according to the value of the "Info" column: a) if the Info column has "Batch has been executed with data", the cell fill color will be green; b) if the Info column has "Batch has been executed with no data", the cell fill color will be yellow; c) if the Info column has "Batch has not been executed", the cell fill color will be red.
3. "Info" column with the three possible values described above.

Columns 1 and 3 are not a problem, as they come from the data, but I am facing issues with the configuration of the Status column. I did some research in the Splunk Answers space, and there are similar cases that propose using JavaScript and CSS files. I tried to adapt those codes, but as I am not an expert in .js and .css, I did not succeed. I would really appreciate it if someone could write the .js and .css code for me. Thanks in advance for helping; I did my best, but it was not possible. I don't really know whether it is possible in the end, or whether I have other options.
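Before reaching for JS/CSS, a first cut may be possible in Simple XML alone if you let the search copy the Info value into Status (an assumption: a cell can only be colored on its own value this way). A sketch of the table's format option, with arbitrary hex colors; verify the expression syntax against your Splunk version's dashboard reference:

```xml
<format type="color" field="Status">
  <colorPalette type="expression">case(match(value, "with data"), "#53A051",
    match(value, "with no data"), "#F8BE34",
    match(value, "not been executed"), "#DC4E41")</colorPalette>
</format>
```

Hiding the text of the Status cell while keeping the fill color is the part that genuinely needs CSS (or rendering the text in the same color as the fill).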
So the case goes as such: I am only able to push between 55-60 EPS (events per second) into an index via TCP port 5000. During a load test, upwards of 120 events/sec are generated and pushed into a single Splunk server instance (no clusters) in real time. Fortunately, the Splunk server is able to receive volumes of 55-60 EPS without hassle, and the time to open the TCP connection, send the event, and close the connection is observed to be under 300-400 milliseconds. The unfortunate observation is that above 60 EPS there is a drastic increase in the response time to receive these events, up to 14 seconds, thus limiting the EPS the Splunk server can handle on the TCP port to 55-60.

On the assumption that local port connections were being exhausted, I tried the following without success:
1. Decreased TCP keepalive from 7200 to 60: sudo sysctl -w net.ipv4.tcp_keepalive_time=60
2. Increased the local port range: sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"

Configuration of the Splunk server: 16-core hardware, 64 GB RAM, OS Ubuntu, enterprise license type. Utilization during 60 EPS was under 20%.

Is there any configuration I can alter, and where, to ensure the Splunk server can scale and cater for more than 60 EPS via the TCP port? Do revert if you need any further clarification; your help resolving my concern is greatly appreciated.
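If splunkd's single ingestion pipeline turns out to be the bottleneck (an assumption worth verifying in the Monitoring Console first), one knob to try on a box with this much headroom is server.conf:

```ini
# server.conf on the Splunk server: add a second ingestion pipeline set.
# Each set costs CPU and memory, so verify utilization before going higher.
[general]
parallelIngestionPipelines = 2
```

That said, opening and closing a TCP connection per event is itself expensive at the sender: keeping one persistent connection open and streaming events over it may raise the ceiling more than any Splunk-side setting.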
I'm trying to count the values of a field in a timechart at every particular point of time, using dedup, like this:

index=internal field1=* field2=* field3=* | dedup field3 | timechart count(field3) by field2

but it is giving only the total count of field3 in a row. I want the total count at every particular point in time to display in the timechart. Sorry for the typos.
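If the goal is distinct values of field3 per time bucket (a sketch of what I understand; the span is a placeholder), distinct-count inside timechart avoids the global dedup:

```spl
index=internal field1=* field2=* field3=*
| timechart span=15m dc(field3) by field2
```

The issue with the original is that dedup field3 keeps only the first event for each field3 value across the whole time range, so later buckets lose their events and the counts collapse toward a single total.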
We onboarded an application recently, and now we are seeing 100K aggregation issues (log level WARN) and 30K timestamp issues (log level WARN) yesterday from one source. We have been monitoring that source for the last 10 days; the events and formatting are similar throughout, and the maximum number of events coming from that source is not more than 5K per day. Do I need to ignore these warnings? What causes these issues? Will they affect our environment? I don't know where to start looking. Can someone help? Thank you for the support, Splunkers!!!
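Aggregation and timestamp WARNs usually mean Splunk is falling back to guessing at event breaking and timestamps for that source. An explicit props.conf for the sourcetype typically silences both; a sketch, where the stanza name and time format are placeholders for your actual data:

```ini
# props.conf on the parsing tier for the noisy sourcetype
[my_app_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

The warning counts can far exceed the daily event count because a single badly broken event can trigger multiple parse attempts, so 100K warnings from 5K events/day is not itself alarming; fixing the line breaking and timestamp recognition is the place to start.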