All Topics

Hello experts, I am working on statistics for meetings. As the attached photo shows, this meeting lasts 7 hours (duration_hour) and starts at 8 AM (date_hour). I need to duplicate this event 7 times, adding 1 hour to date_hour each time. The final result I want is:

date_hour   _time              The rest of the fields
8           10/29/2020 8:00    same
9           10/29/2020 9:00    same
10          10/29/2020 10:00   same
11          10/29/2020 11:00   same
12          10/29/2020 12:00   same
13          10/29/2020 13:00   same
14          10/30/2020 14:00   same

Looking forward to your answers, thank you.
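A common pattern for fanning one event out into one copy per hour is mvrange plus mvexpand. This is only a sketch, assuming the field names date_hour and duration_hour from the question and that _time holds the meeting start:

```spl
index=meetings
| eval offset=mvrange(0, duration_hour)
| mvexpand offset
| eval _time=_time + offset * 3600
| eval date_hour=date_hour + offset
| fields - offset
```

mvrange(0, 7) yields the multivalue 0,1,...,6, so mvexpand produces seven copies of the event, each shifted by the corresponding number of hours.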
I recently transitioned to a new instance of Splunk and have been having some trouble configuring the new environment. I have 8 remote Windows hosts with identical forwarders (inputs.conf and outputs.conf are the same on all 8). Five of the hosts are forwarding correctly, but three are reporting: "The TCP output processor has paused the data flow". From similar posts, this error message seems to be generic: it means nothing is wrong with the forwarder itself, but something is wrong on the server side. I reviewed the monitoring console but didn't see anything indicating that the server or index is unable to receive more data.
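One way to check whether the indexer-side queues are actually backing up is to chart the queue sizes from the indexer's own metrics. A sketch, assuming index=_internal from the indexer is searchable from your search head (replace the host filter with your indexer's name):

```spl
index=_internal source=*metrics.log group=queue host=my_indexer
| timechart span=5m max(current_size) by name
```

Queues pinned at their maximum (e.g. indexqueue or parsingqueue) would point at the indexer rather than the three forwarders.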
I have a Phantom playbook that will take security-related actions on any arbitrary host on my network. These actions might need to be taken at any time of day, on weekends, holidays, etc., so I need to make sure any member of my 24/7 security operations center can run the playbook. I'm looking for a way they can initiate the playbook without explicitly logging into Phantom. Is there a way that a Splunk dashboard can start a Phantom playbook, after accepting the information required for that playbook (hostname, user ID assigned to that host, etc.)?  
Hello all, we are trying to build a new indexer cluster with a new cluster master. We installed Splunk on all the servers and integrated the indexers with the cluster master. After all that, we are getting search and replication factor errors with the warning message below. We checked port connectivity between all the indexers and the cluster master; everything is connected, but we are still getting this warning. We tried cleaning up the eventdata as suggested in one of the posts, but that did not work either. If anyone has faced and resolved this type of issue, please let me know; that would be very helpful. Let me know if you need any more info. We have search and replication factor = 2 with three indexers.

Search peer abcd.com has the following message: Too many bucket replication errors to target peer=xx.xx.xx.xx:8080. Will stop streaming data from hot buckets to this target while errors persist. Check for network connectivity from the cluster peer reporting this issue to the replication port of target peer. If this condition persists, you can temporarily put that peer in manual detention.

Thanks.
Hi guys, forgive the n00bness of this question, as I'm sure it's fairly straightforward and/or has been answered before. I'm just in the process of rolling out Splunk to the business. One of the key requirements is parsing our Nginx logs. I'm able to do this easily from a standard Linux box using a deployment server. However, all our websites are moving to Kubernetes, so I'm wondering what's the best way to get the data from the nginx containers/pods to Splunk. Obviously adding the UF to the Docker image for every microservice would be overkill, as I want to keep the images as lean as possible. Am I right that that's not the best way to do this? Thanks!
Hi there, I have already tried several search commands like "eval _time=strptime(time,"%Y-%m-%dT%H:%M:%S")" but was not able to get timechart to use my own timestamp from the uploaded messages. The data originally all has the same _time, because the upload was a bulk upload of a log file that contains several events, each with its own timestamp in the format: time: 2020-11-16T15:15:51.113394. I'd like to chart cpu_percentage over my own timestamp. Hope somebody can help out here. Ralf
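One thing worth checking, as a sketch (field names taken from the question): the timestamps shown carry microseconds, so the strptime format needs a subsecond specifier (%6N) for the fractional part, and the time field must already be extracted before the eval:

```spl
... | eval _time=strptime('time', "%Y-%m-%dT%H:%M:%S.%6N")
| timechart span=1m avg(cpu_percentage)
```

If strptime returns null (format mismatch), _time keeps its old value, which would produce exactly the symptom described.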
We have a search that populates a CSV file for tracking the latest check-ins, with dates formatted as %m/%d/%Y:

Host     agent1_date   agent2_date   agent3_date   agent4_date   agent5_date
Asset1   11/16/2020    11/15/2020    11/16/2020    11/13/2020    11/16/2020
Asset2   11/15/2020                  11/13/2020    11/13/2020    11/13/2020

How do I go about comparing all these dates, getting the latest value, and writing it to a new column, latest_date? Expected outcome:

Host     agent1_date   agent2_date   agent3_date   agent4_date   agent5_date   latest_date
Asset1   11/16/2020    11/15/2020    11/16/2020    11/13/2020    11/16/2020    11/16/2020
Asset2   11/15/2020                  11/13/2020    11/13/2020    11/13/2020    11/15/2020
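One sketch using foreach to fold all the agent*_date columns into a single maximum (the lookup file name checkins.csv is a placeholder; the column names are from the question):

```spl
| inputlookup checkins.csv
| eval latest_epoch=0
| foreach agent*_date
    [ eval latest_epoch=max(latest_epoch, coalesce(strptime('<<FIELD>>', "%m/%d/%Y"), 0)) ]
| eval latest_date=strftime(latest_epoch, "%m/%d/%Y")
| fields - latest_epoch
```

Converting each date to epoch first makes max() compare them numerically rather than as strings, and coalesce handles the empty cells.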
I have the 3 different sets of events below coming from the same source, so I extracted the fields using a rex command for each type of event. This works fine when I use each rex command separately, but when I combine all 3 rex commands, I get 0 results. Is there any way to fix this?

host01u,UAT,2300970,app.rmkb.hk-122,,Deployment Success
host01u,UAT,2319971,app.bww.label-34,HOLD,Deployment Success
host02u,UAT,2319237,app.static-540,No_File

My query:

index=foo source=status.list
| rex field=_raw "(?<Server>\w+.*)\,(?<Environment>\w+.*)\,(?<Req>\d+.*)\,(?<Package>\w+.*)\,(?<Command>)\,(?<Deploy_Status>\w+.*)"
| rex field=_raw "(?<Server>\w+.*)\,(?<Environment>\w+.*)\,(?<Req>\d+.*)\,(?<Package>\w+.*)\,(?<Command>\w+.*)\,(?<Deploy_Status>\w+.*)"
| rex field=_raw "(?<Server>\w+.*)\,(?<Environment>\w+.*)\,(?<Req>\d+.*)\,(?<Package>\w+.*)\,(?<Deploy_Status>\w+.*)"
| stats latest(*) as * by Server,Environment,Package
| table Server,Environment,Req,Package,Deploy_Status
| dedup Server,Environment,Req,Package,Deploy_Status
| stats count by Deploy_Status
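Since the events are comma-delimited with a variable number of fields, one alternative sketch is to split the raw event instead of maintaining three overlapping greedy regexes (field names as in the question):

```spl
index=foo source=status.list
| eval parts=split(_raw, ",")
| eval Server=mvindex(parts, 0), Environment=mvindex(parts, 1),
       Req=mvindex(parts, 2), Package=mvindex(parts, 3)
| eval Command=if(mvcount(parts)==6, mvindex(parts, 4), null())
| eval Deploy_Status=mvindex(parts, -1)
| fields - parts
| stats count by Deploy_Status
```

Taking Deploy_Status from the last element (mvindex -1) handles both the 5-field and 6-field layouts without any regex backtracking surprises.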
Hello, I will try to describe the situation first, then my problem, and then ask my question.

This is my architecture:
- 6 Stormshield firewalls (one per remote site).
- 6 rsyslog/forwarders (one per remote site).
- The rsyslog/forwarders gather logs from /var/log/rsyslog/stormshield/%FROMHOST%/stormshield.log
- The rsyslog/forwarders send logs to the indexers with sourcetype=stormshield and source=/var/log/rsyslog/stormshield/%FROMHOST%/stormshield.log
- The "host" field comes from the %FROMHOST% folder (defined by the hostname of the firewall).

My problem: the "host" field is not normalized, because the hostname is sometimes an IP address and sometimes the DNS name. I can't change the hostnames of my firewalls because a lot of things depend on them. I need to use the "host" field because it is used in a lot of security dashboards.

My question: can I normalize the "host" field by renaming the firewalls somewhere in Splunk, and how can I do it? I want "host" to correspond to my new names.
Example 1: for the firewall XX.XX.XX.1 (old "host" value), the "host" field must be ABC-001.
Example 2: for XX.XX.XX.19, the "host" field must be ABC-019 instead of XX.XX.XX.19, etc.
Thanks Splunkers, regards.
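One common approach is an index-time host override in props.conf/transforms.conf on the indexers (or heavy forwarders). This is a sketch under assumptions: the stanza names are made up, the regex must be adapted to your real addresses, and you would need one transform per firewall (or a search-time lookup instead if you prefer not to rewrite data at index time):

```
# props.conf
[stormshield]
TRANSFORMS-rename_fw_host = stormshield_host_abc001

# transforms.conf
[stormshield_host_abc001]
SOURCE_KEY = MetaData:Host
REGEX = ^host::XX\.XX\.XX\.1$
DEST_KEY = MetaData:Host
FORMAT = host::ABC-001
```

Note the value carried in MetaData:Host is prefixed with host::, so both the REGEX and the FORMAT must include that prefix.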
Hi, I am trying to create a service using a service template, so that I can use the same KPIs (in the template) against the set of hosts matching the entity rules in my newly created service. However, when I view this service in the Service Analyzer, the service is grayed out and NO entities are populated against the KPIs in my service. Other services (previously created by someone else) using the same service template work fine. Here are the troubleshooting steps I have already taken:
1. Opened the KPI searches separately in a search window and searched against my entities by adding | search host="my entity name". This returns values.
2. Opened some of the matched entities on the Entities page and checked the 'Assigned Service(s)' setting under the Edit Entity option. I can see the services previously created by someone else assigned to the entities, but my service is not listed there, even though I can see the entities listed under the entity rule option when I open the service.
Please help solve this. I am very new to ITSI and trying to learn, so please pardon me if this is a very basic question that I am struggling with.
Hi. I have an alert that tells me if a host is down, and it runs for both Active and Standby hosts. The issue: when the standby host hasn't received a log, I'd like to run a search to see whether the active host has received a log in the last 24 hours, and if so, ignore the alert. I can run a search for all IPs, but what I can't seem to do is, on seeing that 198.0.0.2 is down, check 198.0.0.1 (the Active address is always the Standby address minus 1). I thought something like this might work, but no:

index=* host=* [search index="*" host=198.0.0.2
| rex field=host "(?<Net>\d+\.\d+\.\d+)\.(?<Host>\d+)"
| eval Host2 = (Host-1)
| eval newhost= Net. "." .Host2
| fields newhost]
| where host=newhost

Any and all help appreciated.
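One likely issue worth checking: a subsearch is evaluated first and substituted into the outer search as a filter, so newhost never exists as a field for the outer where clause. Renaming the computed value to host inside the subsearch makes the substitution itself do the filtering. A sketch, using the addresses from the question:

```spl
index=* [ search index=* host=198.0.0.2
    | rex field=host "(?<Net>\d+\.\d+\.\d+)\.(?<HostOct>\d+)"
    | eval host=Net . "." . tostring(tonumber(HostOct) - 1)
    | fields host
    | format ]
```

The subsearch returns host="198.0.0.1", which the outer search then matches directly.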
I have a JSON input with different types, all representing data points at certain times. I have the start time of the event and am looking for a way to get all the data organised without having to resort to custom Python code; how do I do this? The 'time' type in the JSON gives how many seconds after the start time each data point occurs, across all the types. Ideally, assuming the start time is 12:00:00, I'm looking for something like the table below based on the data underneath (latlng could even be split further, but that's secondary for now):

_time      distance   altitude   latlng
12:00:00   3.8        51.5       -33.895627, 151.228228
12:00:01   5.2        51.6       -33.895627, 151.228228
12:00:03   6.7        51.5       -33.895627, 151.228228
12:00:04   8.9        51.5       -33.895627, 151.228228

[
  { "type": "time",     "data": [ 0, 1, 3, 4 ] },
  { "type": "distance", "data": [ 3.8, 5.2, 6.7, 8.9 ] },
  { "type": "altitude", "data": [ 51.5, 51.6, 51.5, 51.5 ] },
  { "type": "latlng",   "data": [ [ -33.895627, 151.228228 ],
                                  [ -33.895627, 151.228228 ],
                                  [ -33.895627, 151.228228 ],
                                  [ -33.895627, 151.228228 ] ] }
]
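A sketch of a pure-SPL transpose using spath, mvzip, and mvexpand. Assumptions are heavy here: the types always appear in the order shown (spath array indexes are 1-based), the whole JSON array is one event, and start_time is a field already holding the start as epoch seconds; latlng is left out for simplicity:

```spl
... | spath path={1}.data{} output=offset
| spath path={2}.data{} output=distance
| spath path={3}.data{} output=altitude
| eval zipped=mvzip(mvzip(offset, distance), altitude)
| mvexpand zipped
| eval parts=split(zipped, ",")
| eval _time=start_time + tonumber(mvindex(parts, 0))
| eval distance=mvindex(parts, 1), altitude=mvindex(parts, 2)
| table _time distance altitude
```

mvzip pairs up the parallel arrays element by element, and mvexpand then emits one row per data point.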
Hello, I'm trying to compare the latest data with data from seven days back. I want to create column charts in a dashboard: one chart with today's data and one with the data from 7 days back. For example, if I select "last 15 mins" from the dropdown, both panels should display data for the same time interval (one panel for today's date and the other for the date 7 days back). Is this possible?
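A sketch of the two panel searches for a fixed 15-minute window (index and span are placeholders; making the window follow the dropdown token would need either token math in the dashboard or the timewrap command in a single panel):

```spl
index=web earliest=-7d-15m@m latest=-7d@m
| timechart span=1m count
```

The first panel would use the same search with earliest=-15m@m latest=now; the shifted panel above moves both boundaries back exactly 7 days, so the two charts cover identical windows one week apart.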
Hi all. The Splunk Google Cloud add-on 3.0.2 input fails with an exception on Splunk 7.2.3, although the docs state that the add-on is supported on 7.2.3: https://docs.splunk.com/Documentation/AddOns/released/GoogleCloud/Releasenotes

We have a case open, but Splunk suggests upgrading to Python 3. However, in server.conf for 7.2.3 there is no option in the spec file to add the Python switch, so even if we install Python 3, how do we force Splunk to use it? (It would also mean having to test other add-ons for compatibility; I am aware of the migration app.) https://docs.splunk.com/Documentation/Splunk/7.2.3/Admin/Serverconf

The setting below is not available in the 7.2.3 spec; it is only quoted in 8.x:

[general]
python.version = {python2|python3|force_python3}
* For Python scripts only, sets the default Python version to use.
* Can be overridden b

Sample of errors for the inputs:

2020-11-12 13:41:01,373 level=INFO pid=62379 tid=MainThread logger=splunksdc.collector pos=collector.py:run:248 | | message="Modular input started."
2020-11-12 13:41:06,813 level=ERROR pid=62379 tid=MainThread logger=splunk_ta_gcp.modinputs.pubsub pos=utils.py:wrapper:68 | start_time=1605188461 datainput="Prod_input_testing" | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunksdc/utils.py", line 66, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/modinputs/pubsub.py", line 489, in run
    return handler.run(subscriptions)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/modinputs/pubsub.py", line 383, in run
    return self._run_consumer(subscriptions[0])
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/modinputs/pubsub.py", line 393, in _run_consumer
    consumer.run()
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/modinputs/pubsub.py", line 189, in run
    parcel = agent.pull()
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/modinputs/pubsub.py", line 135, in pull
    self._ttl = self._get_acknowledge_deadline()
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/modinputs/pubsub.py", line 168, in _get_acknowledge_deadline
    content = response.json()
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/requests/models.py", line 897, in json
    return complexjson.loads(self.text, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/simplejson/__init__.py", line 516, in loads
    return _default_decoder.decode(s)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/simplejson/decoder.py", line 374, in decode
    obj, end = self.raw_decode(s)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/simplejson/decoder.py", line 404, in raw_decode
    return self.scan_once(s, idx=_w(s, idx).end())
JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Wondering if anyone has had a similar experience. Note that the migration tool in the app fails because the future package is not available (that is a separate issue); it needs the future package for Python 3.
Hi, I'm trying to get the username and password of the user calling a Python script from the search bar in the Splunk UI. I need this to log into SMTP to send an email (smtp.login(username, password)). I need to use SCPv2, so the results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults() route is not an option. I can get the authenticated session connection via the self object (self.service). I thought I should be able to get the username and password using "storage_passwords"; however, when I use that and output the username and password to the logger, I see the following:

Username: Windows_Usage``splunk_cred_sep``2
Password: ``splunk_cred_sep``S``splunk_cred_sep``P``splunk_cred_sep``L``splunk_cred_sep``U``splunk_cred_sep``N``splunk_cred_sep``K``splunk_cred_sep``

It looks like the username and password are encoded in some way. If I try to use those credentials, I get a "[HTTP 401] Client is not authenticated" error. Looking at the capabilities of the user, I see that "list_storage_passwords" is included. Any ideas on how I can get the username and password? If I hardcode them, everything works, but I do not like to have passwords in script files.
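Based purely on the strings shown above, the stored value looks like a versioned credential format in which the real characters are interleaved with ``splunk_cred_sep`` markers (reading clear_password from the splunklib StoragePassword item is usually the cleaner route; the decoder below is only a workaround sketch that reverses the exact encoding shown):

```python
SEP = "``splunk_cred_sep``"

def decode_username(raw: str) -> str:
    """The part before the first separator; the trailing '2' looks like a
    format-version marker, not part of the username."""
    return raw.split(SEP)[0]

def decode_password(raw: str) -> str:
    """Drop the separator markers, keeping the interleaved characters."""
    return "".join(part for part in raw.split(SEP) if part)
```

For the logged example, decode_username yields Windows_Usage and decode_password yields SPLUNK, i.e. the values you would pass to smtp.login().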
Hi, we're trying out the Splunk App for Infrastructure to see if it adds any value to our existing deployment. We have already deployed UFs and are already collecting some metrics data, so we thought it would be possible to use this existing metric data in the app, but we can't figure out how to make it work. We have updated the macro "sai_metrics_indexes" to include our custom metric index, but still, when looking at the "Investigate" dashboard of the app, no entities are listed. The table below shows some of the metric data we have from Kubernetes, using the following command:

| mstats avg(*) where index=metric-kub by host

host           avg(kube.container.  avg(kube.container.  avg(kube.node.        avg(kube.pod.  avg(kube.pod.  avg(kube.pod.   avg(kube.pod.
               cpu.limit)           memory.request)      memory.utilization)   cpu.limit)     cpu.request)   memory.limit)   memory.request)
kub-server01   25.8566              95.48                2675338291            39.3           73.9           57.4            145.2
kub-server02   25.8566              95.48                2758867115            39.3           73.9           57.4            145.2

What do we need to do to include this metric data in the Splunk App for Infrastructure? Do we have to ingest the data all over again, using the exact configuration script from the "Add Data" tab in the app? Thanks!
Hi, I am trying to figure out whether I need a heavy forwarder or not. From what I have read in the documentation, a heavy forwarder is needed to be able to use add-ons like Splunk DB Connect; is that correct? If so, can the heavy forwarder forward to a universal forwarder, which then forwards to Splunk Cloud? Heavy Forwarder --> Universal Forwarder (DMZ) --> Splunk Cloud. I presume it is possible, but I just wanted to be 100% sure.
I have set up a Splunk server on the LAN. I can access the web interface from all machines on the LAN except one. Browser: Internet Explorer 11. The details:

- The login page comes up.
- Authentication happens properly (i.e. if I enter an invalid user/password I get an error message).
- After providing valid credentials I expect to see the Home page, but instead I remain on the same login page.

The following message is shown in IE 11's console:

HTML1300: Navigation occurred. File-En/US

Do I need to update any configuration in IE 11? Regards, Yogesh
Hi all, I just want to know how I can be informed about security patches and release information. Is there a newsletter for that? Thanks.
Hi, is there a search command that will ignore the most recent X number of events for each day while using a timechart command? Thanks.
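One sketch using streamstats, with X = 3 and placeholder index/span values. It relies on the default search result order (newest events first), so the first 3 events streamstats sees per day are the most recent 3:

```spl
index=foo
| bin span=1d _time as day
| streamstats count as day_pos by day
| where day_pos > 3
| timechart span=1h count
```

bin writes the day bucket into a separate field, so _time is untouched and timechart still charts at full resolution.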