All Topics

How do I go about editing the data so that the data from Umbrella DNS logs updates the Network Resolution DNS data model?
I am facing the following error while trying to collect logs from the GitLab add-on. Can anyone help me disable certificate verification? Changing verify=True to False for the HTTP request function in base_modinput.py did not help, as was suggested on another similar question about this issue.

2020-03-31 11:33:18,582 ERROR pid=793 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "/base/splunk/etc/apps/TA-gitlab-add-on/bin/ta_gitlab_add_on/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "/base/splunk/etc/apps/TA-gitlab-add-on/bin/get_events.py", line 72, in collect_events
    input_module.collect_events(self, ew)
  File "/base/splunk/etc/apps/TA-gitlab-add-on/bin/input_module_get_events.py", line 166, in collect_events
    headers=headers)
  File "/base/splunk/etc/apps/TA-gitlab-add-on/bin/ta_gitlab_add_on/modinput_wrapper/base_modinput.py", line 476, in send_http_request
    proxy_uri=self._get_proxy_uri() if use_proxy else None)
  File "/base/splunk/etc/apps/TA-gitlab-add-on/bin/ta_gitlab_add_on/splunk_aoblib/rest_helper.py", line 43, in send_http_request
    return self.http_session.request(method, url, **requests_args)
  File "/base/splunk/etc/apps/TA-gitlab-add-on/bin/ta_gitlab_add_on/requests/sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
  File "/base/splunk/etc/apps/TA-gitlab-add-on/bin/ta_gitlab_add_on/requests/sessions.py", line 609, in send
    r = adapter.send(request, **kwargs)
  File "/base/splunk/etc/apps/TA-gitlab-add-on/bin/ta_gitlab_add_on/requests/adapters.py", line 497, in send
    raise SSLError(e, request=request)
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:741)
Hello, I am currently working on a query/report that displays MFA information for users in my AWS organizations. The table is as follows:

account_id | UserName | AccessKeyMetadata{}.AccessKeyId | Days Since Last Login | MFA Present | MFA Detail

I'm looking to pull the age of the AccessKeyId but am having trouble. Any suggestions? I am currently using the stats command to pull all current MFA-related info:

| stats latest(days_since_login) as "Days Since Last Login", latest(mfa_present) as "MFA Present", latest(mfa_detail) as "MFA Detail" by account_id, UserName, AccessKeyMetadata{}.AccessKeyId

Ideally, I would like to pull the age of the AccessKeyId. Any help would be greatly appreciated. Thanks, Kiran
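On the age itself: IAM access-key metadata carries a CreateDate alongside each AccessKeyId, so the age is just "now minus CreateDate". A sketch of that arithmetic in Python — the dates are hypothetical sample values; in SPL the equivalent would be an eval using strptime() on the CreateDate field and now():

```python
from datetime import datetime, timezone

# Hypothetical CreateDate from the access-key metadata,
# and a fixed "now" so the example is deterministic
create_date = datetime(2020, 1, 1, tzinfo=timezone.utc)
now = datetime(2020, 3, 31, tzinfo=timezone.utc)

# Age of the access key in whole days
key_age_days = (now - create_date).days
```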
Is there a way I can keep track of the 500 MB daily limit on Splunk Free, so that I can stop indexing when I get close to 500 MB?
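The per-day ingest figures are typically available in index=_internal (license_usage.log, summing the b field gives bytes indexed). The headroom check itself is simple arithmetic; a sketch with hypothetical numbers:

```python
# Hypothetical bytes indexed so far today, e.g. summed per sourcetype
# from the license usage log
ingested_today = {"access_combined": 220_000_000, "syslog": 180_000_000}

LIMIT_BYTES = 500 * 1024 * 1024  # 500 MB daily indexing limit on Splunk Free

used = sum(ingested_today.values())
headroom = LIMIT_BYTES - used
near_limit = used >= 0.9 * LIMIT_BYTES  # e.g. raise an alert at 90% of the limit
```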
Hi, I'm deploying Splunk Stream with an independent forwarder and an external NFS share (plus two indexers and a search head with the splunk_stream app). The objective is to capture and reference the HTTP flow of a specific web application (for traceability purposes). Because of that, the main concern is the ability to download pcap files while only indexing small events (_time, src_ip, desti_ip, pcap-url). However, in my tests, I couldn't find any pcap-url or path fields in my HTTP flow events (stream:http). How can I handle this? Best Regards
Hi All, I have recently set up an LDAP connection on one of our HFs, and it's giving us the error: ERROR ScopedLDAPConnection - strategy="my_LDAP" Error binding to LDAP. reason="Inappropriate authentication". I have used the same configuration across other Splunk components and am not seeing the error there; I am also able to log in successfully using LDAP on those instances. Could anyone suggest a solution please? @woodcock @richgalloway
We have four indexers in a cluster and a master to manage them. I tried to push the parsing config file from the master, but I am not able to see the fields after the push. Can anyone help me with how to push the parsing config file?
Hey everyone. First-time SSL setup (IDX & UF, both v8.x) and cert creation (never done before). I had a question about verifying whether things worked. I walked through the Splunk docs and got to the point of verifying the connection: https://docs.splunk.com/Documentation/Splunk/8.0.2/Security/Validateyourconfiguration

On my IDX I can run:

index=_internal source=*metrics.log* group=tcpin_connections | dedup hostname | table _time hostname version sourceIp destPort ssl

I do get the expected result returned: port (9998) and ssl = true. BUT this was the only part of the instructions that I could follow to verify the SSL items. I can't find anything else within splunkd or index=_* referencing what is mentioned in the validation procedures for the UF portion. I can see that the UF has the following output in splunkd: "Connected to idx=10.202.20.229:9998". This UF is only set up to forward to the one server over port 9998. The next steps I followed were on this page: https://docs.splunk.com/Documentation/Splunk/8.0.2/Security/Troubleshootyouforwardertoindexerauthentication

openssl s_client -connect <server>:<port>

The expected output mentioned "Verify return code: 0 (ok)", but it instead returned "code: 18 (self signed certificate)". Then on the UF I attempted to monitor a random log file that I update, and on the main Splunk server I can see the data come in. I'm just second-guessing whether I did things correctly, given that I wasn't able to validate the internal Splunk logs.
UF server.conf:

[sslConfig]
sslRootCAPath = /opt/splunkforwarder/etc/auth/myCerts/myCACertificate.pem
sslPassword = <with pass here>

UF outputs.conf:

[SSL]

[tcpout]
defaultGroup = group1

[tcpout:group1]
server = 10.202.20.229:9998
disabled = 0
clientCert = /opt/splunkforwarder/etc/auth/myCerts/myNewSplunkForwarderCert.pem
useClientSSLCompression = true
sslPassword = <with pass here>

IDX inputs.conf:

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/myCerts/myNewSplunkIndexerCert.pem
sslPassword = <with pass here>
requireClientCert = "true"

IDX server.conf:

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/myCerts/myCACertificate.pem
sslPassword = <with pass here>
I have the below output from my Splunk query:

  Hostname  | INC Number | Urgency  | Time_CST          | Description
1 CMPS3     | INC000013  | 3-Medium | 03/31/20 09:22:31 |
2 USBTNBTRF | INC000014  | 3-Medium | 03/31/20 08:31:44 |
3 GQPCW     | INC000015  | 2-High   | 03/31/20 08:28:43 |

I have the incident number in the table. How do I give those incident numbers a hyperlink to my incident management URL, specific to each incident? The code that I use:

index=itsm sourcetype=remedy_midtier *Incident_Number* *Host:* NOT *-VO* NOT *WSG* NOT *IPA* NOT *ADS* NOT *-SEC* NOT "*WLNSGW*" AND ("*-LAN*" OR "*-WAN*" OR "*-APN*") AND "Node is down"
| search $timetestD$
| rex field=_raw "Incident_Number\W(?<ITSM_Number>.*)\W\WIncident_Number\W.*"
| rex field=_raw "(Host:\s)(?<Hostname>[^\.<]+\.)"
| eval Hostname = upper(Hostname)
| rex field=_raw "(Urgency:\s)(?<Urgency>\S-\D*[{lmwh}$])"
| rex field=_raw "(AlertID:\s)(?<AlertID>[^\D*]+)"
| rex field=_raw "(Open\s:\s)(?<Description>[^\.*]+)"
| top limit=10000 Hostname, ITSM_Number, _time, Urgency, AlertID, Description
| eval Hostname=replace(Hostname,"[.]","")
| dedup ITSM_Number
| rename Hostname as nodelabel
| eval Time_CST=_time
| sort -Time_CST
| fieldformat Time_CST=strftime(Time_CST,"%x %X")
| rename nodelabel as Hostname, ITSM_Number as "INC Number", AlertID as "Alert ID"
| table Hostname, "INC Number", Urgency, Time_CST, Description
| eval Description=substr(Description,1,150)
| sort -Time_CST
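For context on what an answer might look like: in a Simple XML dashboard, per-row links are usually added with a table drilldown, where $click.value2$ expands to the value of the clicked cell. A minimal sketch — the ITSM base URL is a hypothetical placeholder and the search is elided:

```xml
<table>
  <search>
    <query>index=itsm sourcetype=remedy_midtier ... | table Hostname, "INC Number", Urgency, Time_CST, Description</query>
  </search>
  <drilldown>
    <condition field="INC Number">
      <!-- $click.value2$ is the clicked cell's value, i.e. the incident number -->
      <link target="_blank">https://itsm.example.com/incident/$click.value2$</link>
    </condition>
  </drilldown>
</table>
```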
I have been trying to set up the Cisco Security Suite for the ASA dashboards; however, when I attempt to configure it for first-time use I get a 500 Internal Server Error. I have had a look at all the other articles about the same issue, but to no avail. Below are the things I have tried:

- App permissions
- Timeout set to 1400
- Skip configuration step
- Remove and reinstall
- Disable SSL

The error from the search view:

2020-03-31 15:11:02,694 ERROR [5e834f75af227365e59c8] error:335 - Traceback (most recent call last):
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\cherrypy\_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\cherrypy\_cprequest.py", line 687, in _do_respond
    response.body = self.handler()
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\cherrypy\lib\encoding.py", line 219, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\appserver\mrsparkle\lib\htmlinjectiontoolfactory.py", line 75, in wrapper
    resp = handler(*args, **kwargs)
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\cherrypy\_cpdispatch.py", line 54, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\appserver\mrsparkle\lib\routes.py", line 383, in default
    return route.target(self, **kw)
  File "<C:\Program Files\Splunk\Python-3.7\lib\site-packages\decorator.py:decorator-gen-486>", line 2, in listEntities
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\appserver\mrsparkle\lib\decorators.py", line 40, in rundecs
    return fn(*a, **kw)
  File "<C:\Program Files\Splunk\Python-3.7\lib\site-packages\decorator.py:decorator-gen-484>", line 2, in listEntities
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\appserver\mrsparkle\lib\decorators.py", line 118, in check
    return fn(self, *a, **kw)
  File "<C:\Program Files\Splunk\Python-3.7\lib\site-packages\decorator.py:decorator-gen-483>", line 2, in listEntities
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\appserver\mrsparkle\lib\decorators.py", line 166, in validate_ip
    return fn(self, *a, **kw)
  File "<C:\Program Files\Splunk\Python-3.7\lib\site-packages\decorator.py:decorator-gen-482>", line 2, in listEntities
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\appserver\mrsparkle\lib\decorators.py", line 245, in preform_sso_check
    return fn(self, *a, **kw)
  File "<C:\Program Files\Splunk\Python-3.7\lib\site-packages\decorator.py:decorator-gen-481>", line 2, in listEntities
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\appserver\mrsparkle\lib\decorators.py", line 284, in check_login
    return fn(self, *a, **kw)
  File "<C:\Program Files\Splunk\Python-3.7\lib\site-packages\decorator.py:decorator-gen-480>", line 2, in listEntities
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\appserver\mrsparkle\lib\decorators.py", line 304, in handle_exceptions
    return fn(self, *a, **kw)
  File "<C:\Program Files\Splunk\Python-3.7\lib\site-packages\decorator.py:decorator-gen-475>", line 2, in listEntities
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\appserver\mrsparkle\lib\decorators.py", line 359, in apply_cache_headers
    response = fn(self, *a, **kw)
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\appserver\mrsparkle\controllers\admin.py", line 1731, in listEntities
    app_name = eai_acl.get('app')
AttributeError: 'NoneType' object has no attribute 'get'
I am trying to use tstats to develop a query; however, I need _time to be included in the query for the logic to work, but _time doesn't show seconds and is limited to minute granularity. The Web data model that I am using is accelerated. 2020-03-31 08:45:00 is the time format for _time when using tstats.
We use SA-ldapsearch to pull Active Directory data into the ES Assets & Identity framework. We do not currently ingest DHCP logs, but the IP address last seen for an AD computer is pulled in as part of the ldapsearch lookup gen search (below). Having recently updated to ES 6 and Splunk 8, I'm noticing that workstations are being combined in the Asset KV stores (assets_by_str) if they share an IP address. Since IP addresses change at different times and many of our users work from home with or without VPN, this is a common occurrence. This leads to ridiculous results in investigation in which the "source_hostname" ends up being mapped from the source (DHCP) IP address in the search result to an MV field of 50-60 hostnames all of which at some point or another in history had that IP address. I know that I can turn Asset correlation OFF in the ES configuration for Data Enrichment, but I don't want that, since hostnames are accurately resolved to user identities in many cases; also, old data is better than no data. I have considered conditionally eliminating IP addresses from our DHCP ranges by simply conditionally removing the IP record from the lookup gen search (below), but what I'm really looking for is a best practice. Is Splunk ES 6 designed to handle DHCP in some other way I'm not seeing? If not, this change seems asinine. No one could ever want the asset data for DHCP endpoints to be handled in this way. 
| ldapsearch domain=default search="(&(objectClass=computer))"
| eval city=""
| eval country="US"
| eval priority="medium"
| eval category="normal"
| eval dns=dNSHostName
| eval owner=description
| rex field=sAMAccountName mode=sed "s/\$//g"
| eval nt_host=sAMAccountName
| makemv delim="," dn
| rex field=dn "(OU|CN)\=(?<bunit>.+)"
| eval requires_av="true"
| eval should_update="true"
| lookup dnslookup clienthost as dns OUTPUT clientip as ip
| join managedBy
    [| ldapsearch search="(&(objectClass=user))"
     | rename distinguishedName AS managedBy, sAMAccountName AS managed_by_user
     | table managedBy managed_by_user]
| table ip,mac,nt_host,dns,owner,managed_by_user,priority,lat,long,city,country,bunit,category,pci_domain,is_expected,should_timesync,should_update,requires_av
| outputlookup ad_assets.csv
How can I move the _time column so that it is the last column in the CSV file attached to the email sent by a scheduled report? The query returns _time as the last column, but in the attached file it is set as the first column. The query:

. . .
| table USER_ID duser FIRST_NAME LAST_NAME Duration cn1 _time
| rename cn1 as "Duration (sec)", FIRST_NAME as "First Name", LAST_NAME as "Last Name"
| search "First Name"="" AND "Last Name"=""
| outputcsv vpn_data.csv
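As a post-processing workaround outside Splunk, reordering the columns of the attached CSV is straightforward; a sketch assuming a file whose first column is _time (the sample content is hypothetical):

```python
import csv
import io

# Hypothetical CSV content with _time as the first column,
# as it appears in the email attachment
src = io.StringIO("_time,USER_ID,duser\n2020-03-31 10:00:00,42,jsmith\n")

rows = list(csv.reader(src))
idx = rows[0].index("_time")

# Move the _time column to the end of every row, header included
reordered = [row[:idx] + row[idx + 1:] + [row[idx]] for row in rows]
```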
We are trying to set up a Splunk License Manager and have it "automatically" pull in the licenses from within the container. The following is the YAML file we have. We can put all our files into /tmp/splunk-license using a ConfigMap, but we have not been able to copy them into /opt/splunk/etc/licenses/enterprise using a "command", and have also not been able to get just an ENV variable to pull in the 2 licenses. Any ideas? Process: create a k8s YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk-license-manager
  namespace: splunk
  labels:
    app: splunk-license-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: splunk-license-manager
  template:
    metadata:
      labels:
        app: splunk-license-manager
    spec:
      containers:
        - name: splunk-license-manager
          image: splunk/splunk:8.0.2.1
          env:
            - name: SPLUNK_HOME
              value: /opt/splunk
            - name: SPLUNK_ROLE
              value: splunk_license_master
            - name: SPLUNK_PASSWORD
              value: theGreatPassword
            - name: SPLUNK_LICENSE_URI
              value: /tmp/splunk-licenses/enterprise.lic,/tmp/splunk-licenses/itsi.lic
            - name: SPLUNK_LICENSE_INSTALL_PATH
              value: /tmp/splunk-licenses
            - name: SPLUNK_START_ARGS
              value: "--accept-license"
            - name: SPLUNK_INDEXER_URL
              value: indexer1,indexer2,indexer3,indexer4,indexer5,indexer6,indexer7,indexer8,indexer9
            - name: SPLUNK_SEARCH_HEAD_URL
              value: search1,search2,search3
            - name: DEBUG
              value: "true"
Hi, I'm trying to use Splunk for Perforce log analysis. Is there an app or existing custom app for this?
Situation: - I have some records with a human readable field "Creation Date" (MM/DD/YYYY HH:MM:SS). - I'd like to sort by "Creation Date" Problem: - The sort command does not appear to work. I believe this is because it needs to be in epoch time to make the calculation. Proposed Solution: - Convert the field to epoch and run the sort command against the data set using the new epoch field.
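The proposed solution is sound: the key step is parsing MM/DD/YYYY HH:MM:SS into a sortable value rather than sorting the string (in SPL, an eval with strptime() using the matching format, then sorting on the epoch field). The same conversion in Python, for illustration with hypothetical values:

```python
from datetime import datetime

# Hypothetical "Creation Date" values in MM/DD/YYYY HH:MM:SS form
dates = ["03/31/2020 09:22:31", "12/01/2019 08:00:00"]

# A plain string sort would put 03/31/2020 first (month compares before year);
# parsing to datetime gives true chronological order
ordered = sorted(dates, key=lambda s: datetime.strptime(s, "%m/%d/%Y %H:%M:%S"))
```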
Good morning, I have event times showing 4 hours ahead of the actual events. Can anyone point me in the right direction to get the difference corrected? The weird thing is that when I run a search on my deployment server the times match, but not on my search heads. Here are the props I am using for one of the data sources I am seeing the difference in (this is in the O365 app's local folder):

[o365:management:activity]
TRUNCATE = 10485760
TIME_PREFIX = "CreationTime":\s*"
KV_MODE = json
TZ = US/Eastern

The event time is 4 hours ahead of the actual event. Please let me know if you need more information. Thank you for your help with this.
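One hedged observation: the Office 365 Management Activity API emits CreationTime in UTC. If TZ = US/Eastern makes Splunk read a UTC wall-clock time as Eastern daylight time (EDT = UTC-4), the indexed epoch lands exactly 4 hours ahead, which matches the symptom. A sketch of the arithmetic (the timestamp value is hypothetical):

```python
from datetime import datetime, timedelta, timezone

raw = "2020-03-31T12:00:00"  # hypothetical CreationTime, actually a UTC wall-clock time

# Correct reading: the timestamp is UTC
as_utc = datetime.fromisoformat(raw).replace(tzinfo=timezone.utc)

# Misreading the same wall-clock time as US/Eastern daylight time (UTC-4)
as_edt = datetime.fromisoformat(raw).replace(tzinfo=timezone(timedelta(hours=-4)))

# The misread epoch is exactly 4 hours later than the true one
shift_hours = (as_edt - as_utc).total_seconds() / 3600
```

If that is the cause, changing TZ to UTC in props.conf on the parsing tier and restarting would be the usual fix; worth verifying against a raw event first.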
Hello Everyone, I have a set of CSV files that are created and need to be monitored in Splunk, but these CSV files are not getting monitored. However, if I change the extension from .csv to .txt or .log, then the files are monitored. Kindly suggest.
Hi, Our environment has 3 indexers + 3 search heads in 1 site. We are planning to implement a DR/HA setup across 2 sites (a multisite cluster). Daily license volume will be 600 GB. Please advise on a plan to set up DR/HA for this, i.e. how many servers need to be added, and with what memory and capacity.
Hello, when I download the Region Chart Viz from Splunkbase (app/4911) and install it on my Splunk instance, I get a pseudo-CustomViz app called "Region Chart Viz", but:
- It fails immediately because the app name is wrong
- The Region Chart is not even in the app examples
So I guess the build got messed up at some point. Am I wrong?