All Topics

I am trying to add a custom visualization to my Splunk Enterprise instance, but I cannot find the option to add a file under Manage Apps. Is a particular role required for that option to appear?
I have a problem with SNMP polling. I need to monitor CPU on 140 servers and use SNMP polling to do it, but of the 140 servers I only get data from 110-120, and after every restart of Splunk the set of reporting servers changes within that 110-120 range. For example, a server A that was returning SNMP data may stop working after a restart. splunkd.log shows:

05-20-2020 09:54:41.578 +0600 ERROR ExecProcessor - message from "/opt/splunk/bin/python2.7 /opt/splunk/etc/apps/snmp_ta/bin/snmp.py" obj.handle_error()
05-20-2020 10:16:31.405 +0600 ERROR ExecProcessor - message from "/opt/splunk/bin/python2.7 /opt/splunk/etc/apps/snmp_ta/bin/snmp.py" Exception with getCmd to x.x.x.x:161: poll error: Traceback (most recent call last):

By the way, each input is configured with 3 hosts, and I currently have 47 data inputs.
Hi Team, can we ingest JavaScript logs from browser-based apps into Splunk, and can we do this through an HEC token?
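For context: HEC accepts JSON events over HTTPS with the token passed in an Authorization header, so browser JavaScript can POST to it (subject to CORS being allowed on the HEC endpoint). A minimal Python sketch of the request shape; the endpoint URL, token, and sourcetype below are placeholders, not real values:

```python
import json
import urllib.request

# Placeholder endpoint and token -- substitute your own.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_request(message, sourcetype="browser:js"):
    """Build (but do not send) an HEC event request."""
    payload = json.dumps({
        "event": message,
        "sourcetype": sourcetype,
    }).encode("utf-8")
    return urllib.request.Request(
        HEC_URL,
        data=payload,
        headers={"Authorization": "Splunk " + HEC_TOKEN,
                 "Content-Type": "application/json"},
    )

req = build_hec_request({"level": "error", "msg": "script failed"})
print(req.get_header("Authorization"))
```

A browser client would send the same JSON body and headers with `fetch`; the Python version here just makes the payload shape explicit.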
Hello, I am having issues with agent authentication and installation. I set up a service account on our domain and created the group recommended in the documentation for users and computers. I added the permissions to the groups and GPO and applied them to the default domain controller policy. During installation I run it under a domain account (domain\username and password), specify the logs, and point the agent at the deployment server. But when I log in to the Splunk dashboard, I do not see the host.
I have been participating in the Splunk Advanced Searching and Reporting course, and there is one thing mentioned in the materials but not really explained anywhere I've found so far. When creating lispy, Splunk will use lookups and field extractions from search time, presumably because they are in the knowledge bundle. However, according to the course materials, it will not use field aliases. Three questions arise from this:
1. Why not aliases, especially since these underpin the CIM to a great extent?
2. Which other search-time knowledge objects are or are not evaluated when lispy expressions are created?
3. Where is this documented?
My search returns an Account_Domain field that contains two values. How can I keep only the first value, so that I am left with just ALPHA?
I've got a scenario where timechart returned a column named 'VALUE', even though value=VALUE never appears in my logs for the field in my by clause:

index=xpto
| rename field as NormalizedField
| stats count by NormalizedField
| join type=inner NormalizedField [ inputlookup table.csv] `coment("This table has 150000 rows with 1 column to make a filter on NormalizedField")`
| timechart sum(count) as count span=60m by NormalizedField usenull=f useother=f limit=10 partial=f

The results were something like this:

_time | 3.4 | 3.5 | 3.8 | 3.8.2 | 3.9.0 | 3.9.1 | VALUE

My problem is why this "VALUE" column is there when my NormalizedField never has that value. If I run another stats instead of timechart, I don't see "VALUE" as a row for NormalizedField. Any thoughts?
I am trying to configure a new instance of Splunk. My data-retention requirements are: searchable for 14 days, archived for 5 years. I have configured indexes.conf as below for my index:

coldtofrozendir = $SPLUNK_DB/defaultdb/frozendb
frozentimeperiodinsecs = 1209600

According to the "Set a retirement and archiving policy" and indexes.conf documentation on Splunk Docs, these settings should roll buckets to my frozen directory when the events are two weeks old and leave them there for me to handle. However, both I and the sales engineer are stumped as to why the events in the hot bucket are still over 3 months old. Have we read the documentation correctly? Your input is greatly appreciated. Thank you!
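For reference, the indexes.conf documentation spells these attributes in camelCase, and .conf attribute names are case-sensitive, so lowercase variants may simply be ignored. A sketch of the stanza as documented (the index name is an example; 1209600 seconds is 14 x 86400, i.e. 14 days):

```ini
[defaultdb]
coldToFrozenDir = $SPLUNK_DB/defaultdb/frozendb
frozenTimePeriodInSecs = 1209600
```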
Hi, I have a Solaris 11 box configured with a virtual NIC. I've installed the Splunk forwarder, but whenever I try to set a port or list forward-servers, I get an HTTP timeout:

splunk@serverA:~$ /opt/splunkforwarder/bin/splunk set splunkd-port 6969 --accept-license
Couldn't complete HTTP request: Connection timed out
splunk@serverA:~$ /opt/splunkforwarder/bin/splunk list forward-server
Couldn't complete HTTP request: Connection timed out

IP filtering is disabled on this server:

root@serverA:~# svcs ipfilter
STATE STIME FMRI
disabled May_14 svc:/network/ipfilter:default

I'm able to connect to the Splunk server:

splunk@serverA:~$ telnet splunk 6969
Trying 10.193.10.57...
Connected to splunk.int.rfs.co.nz.
Escape character is '^]'.

I was able to set up the Splunk forwarder on another server, serverB, with the same OS, and it completed without errors. The only difference between the two servers is that serverA has a virtual NIC. Has anyone encountered this issue? I checked the discussions and they pointed to the firewall, but the firewall is disabled on serverA. Thanks!
v5.0.1 of the Splunk Add-on for AWS doesn't properly parse CloudFront access logs from IPv6 clients. You can see that props.conf has these extractions:

EXTRACT-cloudfront_web = ^\s*(?P<date>[0-9-]+)\s+(?P<time>[0-9:]+)\s+(?P<x_edge_location>[^\s]+)\s+(?P<sc_bytes>\d+)\s+(?P<c_ip>[0-9.]+) ...
EXTRACT-cloudfront_rtmp = ^\s*(?P<date>[0-9-]+)\s+(?P<time>[0-9:]+)\s+(?P<x_edge_location>[^\s]+)\s+(?P<c_ip>[0-9.]+) ...

The c_ip patterns will only match IPv4, not IPv6. As a result, I don't get extracted fields on access-log lines that come from IPv6 clients.
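To illustrate the mismatch: relaxing the c_ip character class from `[0-9.]+` to one that also admits hex digits and colons makes both address families match. This is a sketch of the idea, not the add-on's official fix, and the log lines below are made up:

```python
import re

# Relaxed capture group for c_ip: hex digits, dots, and colons, so both
# IPv4 and IPv6 client addresses match. (The shipped pattern is [0-9.]+.)
C_IP = r"(?P<c_ip>[0-9A-Fa-f.:]+)"

# Leading fields mirror the cloudfront_web extraction in props.conf.
pattern = re.compile(
    r"^\s*(?P<date>[0-9-]+)\s+(?P<time>[0-9:]+)\s+"
    r"(?P<x_edge_location>[^\s]+)\s+(?P<sc_bytes>\d+)\s+" + C_IP
)

line_v4 = "2020-05-20 10:00:00 IAD89-C1 1234 203.0.113.9"
line_v6 = "2020-05-20 10:00:00 IAD89-C1 1234 2001:db8::1"

for line in (line_v4, line_v6):
    m = pattern.match(line)
    print(m.group("c_ip"))  # 203.0.113.9, then 2001:db8::1
```

Note the loose class would also accept strings that are not valid addresses; a production fix would likely want a stricter IPv6 alternation.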
I am an SC admin. When a user runs |inputlookup, the user can see the results of the lookup. However, when the user loads a dashboard that uses the same lookup in a panel, they get: Error in 'lookup' command: Could not construct lookup. When I access the dashboard myself, I have no problems and it loads as planned. Why can the user access the dashboard and see the lookup via |inputlookup, yet the lookup fails for them when it is used inside the dashboard?
I'm running into an issue where, as a domain admin, I cannot upgrade Splunk 7.2.9.1 to Splunk 8.0.3. I was able to upgrade 7.2.9.1 to 7.3.5, but I still cannot upgrade from 7.3.5 to 8.0.3. After going through most of the steps, the GUI (Windows) reports: "Splunk Enterprise Setup Wizard ended prematurely because of an error. Your system has not been modified..."
Hello, I'm new to Splunk and just started learning it, and I'm having some issues extracting fields from raw data. Example of the raw data:

04/12 15:50:38 [LOGON] [1860] Domain: SamLogon: Network logon of Domain\test1$ from machine1 Returns 0xC0000064

I would like to extract the following:

SamLogon: Network logon of Domain\test1$ from machine1
Returns: 0xC0000064

I'm trying to use a regex in props.conf on the search head. Any help would be appreciated. Thanks.
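One possible capture shape for this event, sketched in Python so it can be checked against the sample line; the field names follow the question, and an EXTRACT- setting in props.conf could carry a pattern of the same form:

```python
import re

# The sample event from the question.
event = (r"04/12 15:50:38 [LOGON] [1860] Domain: SamLogon: "
         r"Network logon of Domain\test1$ from machine1 Returns 0xC0000064")

# Lazy match for the message body, then the hex status code after "Returns".
pattern = re.compile(
    r"SamLogon:\s+(?P<SamLogon>.+?)\s+Returns\s+(?P<Returns>0x[0-9A-Fa-f]+)"
)

m = pattern.search(event)
print(m.group("SamLogon"))  # Network logon of Domain\test1$ from machine1
print(m.group("Returns"))   # 0xC0000064
```

In props.conf the equivalent would be something like `EXTRACT-samlogon = <the same pattern>` under the relevant sourcetype stanza (stanza and setting name are illustrative).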
Please assist with what appears to be a date/time parsing issue. Splunk Enterprise 7.3.1. Python version info:

$ /opt/splunk/bin/splunk cmd python
Python 2.7.15 (default, Jun 24 2019, 17:39:18) [GCC 5.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.

ERROR log events after setting up and restarting Splunk to collect data for a specified project ID:

05-19-2020 14:10:46.781 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py" Traceback (most recent call last):
05-19-2020 14:10:46.781 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py" File "/opt/splunk/etc/apps/TA-gitlab-add-on/bin/ta_gitlab_add_on/modinput_wrapper/base_modinput.py", line 127, in stream_events
05-19-2020 14:10:46.781 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py" self.collect_events(ew)
05-19-2020 14:10:46.781 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py" File "/opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py", line 72, in collect_events
05-19-2020 14:10:46.781 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py" input_module.collect_events(self, ew)
05-19-2020 14:10:46.781 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py" File "/opt/splunk/etc/apps/TA-gitlab-add-on/bin/input_module_get_events.py", line 203, in collect_events
05-19-2020 14:10:46.781 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py" if (datetime.strptime(r_loop['created_at'], '%Y-%m-%dT%H:%M:%S.%fZ')
05-19-2020 14:10:46.781 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py" File "/opt/splunk/lib/python2.7/_strptime.py", line 332, in _strptime
05-19-2020 14:10:46.781 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py" (data_string, format))
05-19-2020 14:10:46.781 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py" ValueError: time data '2020-02-24T12:48:08.608-05:00' does not match format '%Y-%m-%dT%H:%M:%S.%fZ'
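The root of the ValueError is visible in the last line: the add-on's format string expects a literal trailing 'Z', but GitLab returned a numeric UTC offset (-05:00). As an illustration only (the add-on runs under Python 2.7, where strptime's %z does not handle such offsets), under Python 3.7+ a format ending in %z parses this timestamp cleanly:

```python
from datetime import datetime

# The timestamp from the error message, with a numeric UTC offset
# rather than the literal 'Z' the add-on's format string expects.
ts = "2020-02-24T12:48:08.608-05:00"

# %z (Python 3.7+) accepts offsets written with a colon, like -05:00.
parsed = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")
print(parsed.isoformat())  # 2020-02-24T12:48:08.608000-05:00
```

In Python 2 the usual workarounds are trimming the offset before parsing or using a third-party parser, so this snippet only demonstrates the format mismatch, not a drop-in fix for the add-on.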
I am getting the following when trying to set up my tenant in the 365 app: 500:POST splunk_ta_o365_Splunk_client_secret. It happens at the configuration screen that asks for the name, endpoint, client ID, tenant ID, and the client secret. I know it is talking to 365, because when I purposely enter a junk secret it tells me the secret is wrong, but when I enter the right secret I get that error instead. Has anyone else seen this?
I am trying to enable encryption of the traffic from all of my universal forwarders to the indexer. This involves updating the outputs.conf file on the forwarder, which makes sense. No big deal, but the only way I have ever configured that file is via our software deployment solution when I install the forwarder on a given machine; after that I never touch the file. I could use the same solution to do a simple copy-and-replace on each system, but I was wondering whether this can be done via the app deployment system built into Splunk, the same way I would configure any other config file in any deployed app. I can see why you might not want to do that through the deployment server, in case you mess up a config file and all your forwarders lose the ability to communicate back to the indexer after the update. But if you could, I assume it might be as simple as creating a deployment app called something like "SplunkUniversalForwarder", dumping the config file in its local folder, and having it take precedence over the local $SPLUNK_HOME/etc/system/local/outputs.conf on the given forwarder. Would that work?
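As an illustration of the deployment-app layout being described (app name, group name, and indexer host are made up; this sketches the mechanism, not a claim about precedence over system/local):

```ini
# Sketch: deployment-apps/all_forwarder_outputs/local/outputs.conf
# on the deployment server, pushed to a forwarder server class.
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997
```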
I am trying to figure out a way to calculate:
1. The time taken for a reviewer to assign the notable ticket, measured from its creation time.
2. The time taken for the notable from in progress until close.

notable
| search NOT suppression
| eval _time=strftime(_time,"%Y/%m/%d %T")
| eval review_time=strftime(review_time,"%Y/%m/%d %T")
| eval assign_time = case(isnotnull(owner), _time)
| eval close_time = case(status=5, review_time)
| stats min(_time) as notable_time min(assign_time) as assign_time min(close_time) as close_time by AlertTitle, owner

This gives me the notable created time and closed time, but not the state-change time.
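Durations like these are easiest to compute while the timestamps are still epoch numbers, before strftime turns them into display strings (strings cannot be subtracted). A sketch of the two intervals on made-up epoch values:

```python
# Made-up epoch timestamps for one notable's lifecycle.
creation_time = 1589942400   # notable created
assign_time = 1589946000     # owner first assigned
close_time = 1589953200      # status changed to closed (status=5)

# Interval 1: creation -> assignment; interval 2: in progress -> close.
time_to_assign = assign_time - creation_time
time_to_close = close_time - assign_time

print(time_to_assign, time_to_close)  # seconds: 3600 7200
```

The same subtraction can be done in SPL with eval on the epoch values, formatting to a readable string only at the end.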
The Splunk forwarder is re-indexing files when I edit them with vi. Has anybody seen this before? I have used the configs below, but it still re-indexes again and again.

crcSalt = <SOURCE>
initCrcLength = 2560

TIA for your help.
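For reference, a sketch of how these settings sit in an inputs.conf monitor stanza (the monitored path is a made-up example). Note that editors like vi typically rewrite the whole file on save, which changes the content the CRC check reads, so some re-indexing after in-place edits is expected regardless of crcSalt:

```ini
[monitor:///var/log/myapp.log]
# <SOURCE> is a literal special value: mix the file's full path into the CRC,
# so files with identical headers are not treated as the same file.
crcSalt = <SOURCE>
# Compute the initial CRC over a longer head of the file.
initCrcLength = 2560
```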
Hi Team, I have 10 APIs which run on two distributed hosts, and I want to know the count of API calls on each of the hosts. I am looking for something like the following:

API     Host1  Host2
Order   10     12
Product 20     20

Can someone please help me with this? Thank you so much.
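In SPL this table shape is usually produced by a chart with an over/by split, something like `| chart count over API by host` (assuming the field names are API and host). To make the pivot concrete, the same count-per-(API, host) table computed in plain Python on made-up events:

```python
from collections import Counter

# Made-up events: one record per API call, tagged with api and host.
events = [
    {"api": "Order", "host": "Host1"}, {"api": "Order", "host": "Host2"},
    {"api": "Product", "host": "Host1"}, {"api": "Product", "host": "Host2"},
]

# Count occurrences of each (api, host) pair.
counts = Counter((e["api"], e["host"]) for e in events)

# Print one row per API with a column per host.
hosts = ["Host1", "Host2"]
print("API", *hosts)
for api in ["Order", "Product"]:
    print(api, *(counts[(api, h)] for h in hosts))
```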
I am using the "Enhanced Timeline" app to show some milestones. I have to show the milestones for 6 projects in a single panel, but the milestones are displayed above/below each other, and I want them on a single X-axis line. How can I do that? Is there a way to achieve this by changing the CSS? (An image showing the issue was attached.) Once I can show them in one line, I can append queries with different project milestone dates and show all project milestone statuses in a single panel; that is my understanding. If there is another way, please suggest it. Also, how can I show 5 projects' milestone status in a single panel, with each horizontal line representing one project's status? Please help, this is a bit urgent. Thanks. My sample milestone.csv file has the following fields:

Event Description,start,end,class
Milestone-1,10/01/2018,11/01/2018,green
Milestone-2,11/02/2018,02/01/2019,green
Milestone-3,02/02/2019,04/01/2019,orange
Milestone-4,04/02/2019,07/01/2019,orange
Milestone-5,07/02/2019,08/01/2019,red