All Topics

After a long-overdue upgrade from 6.x to 7.1.3 -- this release is the latest one supported by my vendor, who interoperates with Splunk -- I have a problem: the search head no longer works with the indexers.

On the search head, the full message in splunkd.log is: "Global key files are invalid. This server cannot distribute searches to other servers." In Settings » Distributed search » Search peers, we have error messages:

Error [00000100] Instance name "<deleted>" REST interface to peer is not responding. Check var/log/splunk/splunkd_access.log on the peer. Last Connect Time: 2020-09-14T20:04:01.000+00:00; Failed 1 out of 1 times.

If I delete the search peer and attempt to re-add it, I get the error:

Encountered the following error while trying to save: Invalid action for this internal handler (handler: distsearch-peer, supported: list|edit|remove|_reload|new|disable|enable|doc, wanted: create).

The only way I've found to re-add the search peer is to restart Splunk on the search head.

Also of note on the search head: because of changes by my vendor -- as far as I can tell -- when I install the upgraded Splunk, the vendor automatically restores the old file $splunk/etc/auth/distServerKeys/trusted.pem. As a result, again as far as I can tell, when I start Splunk for the first time, the file $splunk/etc/auth/distServerKeys/private.pem is never generated on the search head. The search peers, on the other hand, do have both files.

Also in splunkd.log, I see messages such as:

DistributedPeer - Peer:https://x.x.x.x:y Key problems, see internal logs

with no indication of where these "internal logs" can be found. I also see:

Bundle Replication: Problem replicating config (bundle) to search peer 'x.x.x.x:y', HTTP response code 401 (HTTP/1.1 401 Unauthorized). call not properly authenticated

On the search peers: the logs do not currently show any particular issue.
The splunkd.log on both indexers (search peers) shows:

WARN HTTPAuthManager - Token not specified in Authorization: Splunk <token> header

and splunkd_access.log shows:

POST /services/receivers/bundle/<search head address> HTTP/1.0" 401 148 - - - 0ms

which provides no useful information. On the search peers, the directory $splunk/etc/auth/distServerKeys/<search head name> has an exact copy of the search head's $splunk/etc/auth/distServerKeys/trusted.pem.

Questions: Why does this fail? Is it due to $splunk/etc/auth/distServerKeys/trusted.pem being present on the search head with incorrect key information? What does "Global key files are invalid" mean, and where can I find further information about how to fix the key files?

I welcome other suggestions -- including suggestions for the right questions to ask.
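In case it helps anyone hitting the same "Global key files are invalid" message: one approach worth trying -- a sketch, assuming the vendor's restore of the old trusted.pem is what breaks the key pair -- is to regenerate the distributed-search key pair by hand and redistribute the public half:

```
# On the search head (paths assume a standard $SPLUNK_HOME):
splunk stop
rm $SPLUNK_HOME/etc/auth/distServerKeys/private.pem \
   $SPLUNK_HOME/etc/auth/distServerKeys/trusted.pem
splunk start   # regenerates a matching private.pem/trusted.pem pair

# Then copy the NEW trusted.pem to each search peer under
#   $SPLUNK_HOME/etc/auth/distServerKeys/<search_head_name>/trusted.pem
# and restart the peers.
```

The 401 "call not properly authenticated" bundle-replication errors are consistent with the peers holding a trusted.pem that no longer matches the search head's private key, so redistributing the freshly generated trusted.pem is the key step.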
Hello all, I'm using Splunk 8.0.5 and the Cisco Firepower eStreamer eNcore Add-on for Splunk 3.6.8|4.0.7 (just finished installing it). I was comparing ingests between Splunk and ArcSight, and it seems ArcSight has a few extra fields for certain rec_type=400 web events:

- request: <malicious URL>
- requestContext: similar to the referrer
- requestClientApplication: similar to the user agent

ArcSight may be deriving these from an additional payload field, but I am having a hard time confirming how that is happening. For these events Splunk does receive an additional rec_type=110 record with type "HTTP URI" and an alphanumeric "data" field, but my events don't include anything similar to a URI or referrer. Has anyone else run across this?
I have around 400 servers distributed over 2 datacenters (the datacenters are physically close to each other and may have some dedicated LAN channels as well). I'm supposed to control their universal forwarders (UFs) using a Linux-based deployment server.

Should I use a single deployment server at one datacenter to manage the UFs in both datacenters, or should I keep two deployment servers, one at each datacenter? There is a VPN connection between the two datacenters' DMZs, where the deployment server will be hosted, and the number of firewall openings would be the same whether UFs connect to a deployment server in the other datacenter or their own. Which should I prefer?

Also, when we say "dedicated deployment server" in Splunk Enterprise, does that mean search head and indexing functions are disabled? I need a search head and indexer alongside the deployment server, to capture only internal UF logs for daily health-check routines; other logs would be forwarded to dedicated indexers not under our control. In that case, should I assume the deployment server can manage only 50 clients, as per the Splunk docs, or can I sufficiently handle 400 servers with a single (or double) deployment server?
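One practical note: whichever topology you pick, each UF only carries a small deploymentclient.conf pointing at its deployment server, so moving forwarders between one-DS and two-DS designs later is cheap. A minimal sketch (the hostname is a placeholder):

```
# deploymentclient.conf on each universal forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds1.example.com:8089
```

Changing `targetUri` (and restarting the forwarder) is all it takes to repoint a UF at a different deployment server.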
Hello, I need to create a use case that monitors the bytes transferred from one host to another, whether by SMB or another method, on the LAN or the WAN. The log does not have any fields that refer to the size of the transferred data. How can I monitor these transfers from one host to another?

I have this base search to adjust to what I need, or I'm open to suggestions. Thank you.

index=* ((tag=network tag=communicate) OR (sourcetype=pan*traffic OR sourcetype=opsec OR sourcetype=cisco:asa OR sourcetype=stream*)) action=allowed (app=smb OR dest_port=139 OR dest_port=445)
| bucket _time span=1d
| stats count by _time src_ip dest_ip dest_port
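If those firewall sourcetypes are onboarded with their usual add-ons, the byte counts are typically available as the CIM fields `bytes`, `bytes_in`, and `bytes_out` rather than in the raw message. A sketch of the adjusted search, assuming those fields exist in the data:

```
index=* ((tag=network tag=communicate) OR (sourcetype=pan*traffic OR sourcetype=opsec OR sourcetype=cisco:asa OR sourcetype=stream*))
    action=allowed (app=smb OR dest_port=139 OR dest_port=445)
| bucket _time span=1d
| stats count sum(bytes) as total_bytes sum(bytes_out) as total_bytes_out by _time src_ip dest_ip dest_port
```

If `sum(bytes)` comes back empty, checking one raw event for the add-on's actual byte field names is the quickest way to adapt the `stats` clause.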
A couple of questions related to a text box prompt.

I have a text box prompt restricted to accept only numeric values up to 9 digits. The ask is to count the digits as the user types into the text box, so users know how many digits they have entered. I can't get around it; I know there is probably JavaScript I can use to calculate the digit count as it is typed and show the character count underneath the text box, but I cannot find such a script online.

One other requirement is to show an info popup when the user enters fewer than 8 digits and clicks the submit button.
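The live counter itself would have to live in a dashboard JavaScript extension wired to the text box's input event, but the validation rules are simple to pin down. A sketch of that logic (written in Python here just to make the rules explicit; the function name is mine, and a JS handler would implement the same checks):

```python
def validate_entry(text):
    """Return (digit_count, error) for a numeric text-box entry.

    Mirrors the dashboard rules: digits only, at most 9 digits,
    and at least 8 digits required before submit is accepted.
    """
    if not text.isdigit():
        return len(text), "numeric characters only"
    n = len(text)
    if n > 9:
        return n, "no more than 9 digits"
    if n < 8:
        return n, "at least 8 digits required"
    return n, None
```

In the dashboard, the returned count would be rendered under the text box on every keystroke, and a non-None error on submit would trigger the info popup.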
I'm doing this as a response to my GET:

self.response.setHeader('content-type', 'application/pdf')
self.response.write(open('/opt/splunk/bin/ffff.pdf','rb').read())

but I receive only a partial base64 of the PDF file and the following error:

(most recent call last):
File "/opt/splunk/bin/rest_handler.py", line 79, in <module>
print splunk.rest.dispatch(**params)
IOError: [Errno 11] Resource temporarily unavailable
</msg>

Any help is appreciated, thank you.
Hi, I'm looking to gain a better understanding of the number of HTTPS/HTTP requests the S3 client sends when uploading buckets to, and downloading buckets from, the remote store.

When a hot bucket rolls to warm and the S3 client uploads the bucket to the remote store, does it make a PUT/POST request for each file in the bucket, or are multiple files batched into the same HTTPS/HTTP request? I understand that if a file is more than 128 MB (assuming the default remote.s3.multipart_upload.part_size is kept), the upload is broken into multiple requests, but I'm wondering whether multiple files are combined into the same request up to 128 MB. Similarly, when a search process requires the objects within a bucket that exists in the remote store, is a single HTTP(S) request made for each object/file within the bucket?

Also, it would be great to know whether there are any other reasons the S3 client would send requests to the remote store besides uploading/downloading a bucket. My reason for asking is that the pricing for S3 storage on AWS is based on the number of requests: https://aws.amazon.com/s3/pricing/ Many thanks, Jamie
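For the multipart case specifically, my understanding (worth verifying against the S3 API docs) is that each file becomes its own S3 object, so nothing is batched across files, and a file larger than the part size costs an initiate request, one PUT per part, and a complete request. A back-of-the-envelope sketch under those assumptions:

```python
import math

def s3_upload_requests(file_size_bytes, part_size_bytes=128 * 1024 * 1024):
    """Rough request count for uploading one file to S3.

    Assumes each file is its own object (no batching across files):
    files at or under the part size cost one PUT; larger files cost
    CreateMultipartUpload + one PUT per part + CompleteMultipartUpload.
    """
    if file_size_bytes <= part_size_bytes:
        return 1
    parts = math.ceil(file_size_bytes / part_size_bytes)
    return parts + 2  # initiate + parts + complete
```

Summing this over the files in a warm bucket gives a rough per-bucket-upload request count for cost estimates; downloads would be roughly one GET per object fetched by the cache manager.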
The Splunk upgrade process from 7 to 8 seems very confusing. Here is what I do:

1. I stop Splunk with systemctl stop splunkd, because if I stop it as the splunk user it starts again (Splunk is configured as a systemd service).
2. I edit the splunkd.service file as root, since the new unit file should not contain User=splunk and certain other directives. I use the file given by Splunk here: https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/RunSplunkassystemdservice
3. I am using an RPM-based install, so I run: rpm -i --replacepkgs --prefix=/splunkdirectory/ splunk_package_name.rpm to replace the existing version 7 package with the version 8 package. This command must be executed as root; if I run it as a non-root user I get: error: can't create transaction lock on /var/lib/rpm/.rpm.lock (Permission denied)
4. Next I start Splunk as per the upgrade recommendation from Splunk: sudo splunk start (https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/RunSplunkassystemdservice#Upgrade_considerations_for_systemd)

This gives me lots of "Invalid key in stanza" warnings while starting, and Splunk is now running as a root process:

-bash-4.2$ ps -ef | grep splunk
root 31719 31229 0 18:08 pts/0 00:00:00 sudo su - splunk
root 31721 31719 0 18:08 pts/0 00:00:00 su - splunk
splunk 31722 31721 0 18:08 pts/0 00:00:00 -bash
root 31806 1 8 18:09 ? 00:00:04 splunkd -p 8089 start
root 31808 31806 0 18:09 ? 00:00:00 [splunkd pid=31806] splunkd -p 8089 start [process-runner]
root 31834 31808 0 18:09 ?
00:00:00 mongod --dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo --port=8191 --timeStampFormat=iso8601-utc --smallfiles --oplogSize=200 --keyFile=/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key --setParameter=enableLocalhostAuthBypass=0 --replSet=A818B836-060F-4BA2-A42E-82AE5CF11FFA --sslMode=requireSSL --sslAllowInvalidHostnames --sslPEMKeyFile=/opt/splunk/etc/auth/server.pem --sslPEMKeyPassword=xxxxxxxx --sslDisabledProtocols=noTLS1_0,noTLS1_1 --sslCipherConfig=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256 --nounixsocket --noscripting root 31938 31808 2 18:09 ? 00:00:01 /opt/splunk/bin/python -O /opt/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/root.py --proxied=127.0.0.1,8065,8443 root 31996 31808 0 18:10 ? 00:00:00 /opt/splunk/bin/splunkd instrument-resource-usage -p 8089 --with-kvstore splunk 32063 31722 0 18:10 pts/0 00:00:00 ps -ef splunk 32064 31722 0 18:10 pts/0 00:00:00 grep --color=auto splunk   Tried to stop the splunk process and run again as user splunk, splunk process starts ok but the splunk daemon is dead systemctl status splunk ● splunkd.service - Systemd service file for Splunk, generated by 'splunk enable boot-start' Loaded: loaded (/etc/systemd/system/splunkd.service; enabled; vendor preset: disabled) Active: inactive (dead) (Result: exit-code) After starting the splunk daemon it is still in failed state, complaining about permissions.   
splunkd.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
Loaded: loaded (/etc/systemd/system/splunkd.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mon 2020-09-14 18:17:07 UTC; 871ms ago
Process: 32730 ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd (code=exited, status=4)
Main PID: 32730 (code=exited, status=4)
systemd[1]: splunkd.service: main process exited, code=exited, status=4/NOPERMISSION
systemd[1]: Unit splunkd.service entered failed state.
systemd[1]: splunkd.service failed.
splunkd.service holdoff time over, scheduling restart.
systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
systemd[1]: start request repeated too quickly for splunkd.service
Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
systemd[1]: Unit splunkd.service entered failed state.
systemd[1]: splunkd.service failed.

Has anyone faced these same issues? Am I working in the correct order, do I need to change the order, or am I missing something in between?
I'm reviewing the logs to make sure the fields match the Splunk Enterprise Security CIM and data models. The query shows me a percentage, which I understand to be the fields that are required versus the fields that it is finding. To adjust these fields, should I create field aliases, or should I perform an extraction ("extract"), whether by regular expressions or by delimiters?
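Either mechanism can close those CIM gaps, depending on where the value lives: if it is already extracted under a different name (say src_ip where the data model wants src), a field alias is enough; if it only exists in the raw event text, a search-time extraction is needed. A sketch (the sourcetype, field names, and regex are placeholders for yours):

```
# props.conf (search-time; deploy to the search head)
[my:sourcetype]
# value exists but under the wrong name -> alias it to the CIM name
FIELDALIAS-cim_src  = src_ip AS src
FIELDALIAS-cim_dest = dest_ip AS dest
# value only present in the raw event -> extract it
EXTRACT-action = \baction=(?<action>\w+)
```

Aliases are cheaper than extractions, so checking for an existing field to alias first is usually the right order.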
I registered and got setup with SplunkCloud Trial version a couple of days ago. All was fine, until I was logged out. All attempts to log back in as "sc_admin" lead to "Login Failed" message. (I had created sc_power, sc_user, sc_apps users for various roles, … all with the same password as sc_admin's, ... and can still login as those userids.   I know the password is correct.) There are no hints or opportunities to reset the password for "sc_admin". How can I log back in as "sc_admin"?
Hi everyone, has anyone integrated IntSights with Splunk Enterprise? If so, how was the integration carried out? Thanks.
Hi Splunk gurus, I have been trying to build my first technical add-on. I just coded input validation with a REST call to check the input stanza configuration. But when I try to create an input stanza, I keep getting this error: 'NoneType' object has no attribute 'get_proxy_settings'. I have not configured any proxy. Why am I getting this error? The REST calls work fine in collect_events. Please help, and thank you in advance.
Hi, I'm trying to get Amazon S3 bucket data into Splunk Cloud. I'm using the trial version as of now. Following this article, https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Admin/AWSGDI, I need an IDM (Inputs Data Manager). Is this available in the trial version? If yes, how can I access it? Also, can Splunk Cloud index Parquet files? Any help is appreciated. Thanks, Krishna Vasudevan
Hi, I want to integrate Splunk with Rally. Is there a way to get Rally defect data from Rally into Splunk? If it's possible through an API, can anyone please share the API and the end-to-end process for getting data from Rally?
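Rally (CA Agile Central) exposes defects through its Web Services API, and on the Splunk side the usual pattern is a scripted/modular input, or posting the JSON to a HTTP Event Collector endpoint. A sketch of building the defect query URL (the base host is Rally's SaaS default; the query and fetch fields are examples, and authentication would be an API key sent in a ZSESSIONID header -- verify all of this against the Rally WSAPI docs):

```python
import urllib.parse

RALLY_BASE = "https://rally1.rallydev.com/slm/webservice/v2.0"  # default SaaS host

def defect_query_url(query="(State < Closed)",
                     fetch="FormattedID,Name,State,Severity,CreationDate"):
    """Build a Rally WSAPI defect query URL (sketch; fields are examples)."""
    params = urllib.parse.urlencode({
        "query": query,
        "fetch": fetch,
        "pagesize": 200,  # page through larger result sets
    })
    return f"{RALLY_BASE}/defect?{params}"
```

A scheduled script would fetch this URL page by page, then write each defect as a JSON event to stdout (scripted input) or POST it to HEC.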
Hello everyone. I'm testing a SmartStore configuration and I'm getting some errors when Splunk tries to move buckets to the remote storage, which I suspect have to do with authentication. The endpoint provided to me is "<bucket_name>.s3-accesspoint.us-east-1.amazonaws.com", and I'm trying to confirm whether I should be using http or https. The CacheManager and S3Client errors I'm getting are, using http, "status=failed, reason="HTTP Error 307: Temporary Redirect"", and using https, "status=failed, reason="HTTP Error 400: Bad Request"".

This is my volume and index configuration. I was wondering if anyone could spot any obvious config errors or point me in the right direction.

[volume:s3]
storageType = remote
path = s3://<bucket_name>/
remote.s3.endpoint = https://<bucket_name>.s3-accesspoint.us-east-1.amazonaws.com
remote.s3.access_key = <access_key>
remote.s3.secret_key = <secret_key>

[smartstore_test]
remotePath = volume:s3/smartstore_test
##retention settings##
maxGlobalDataSizeMB = 0
#38 months
frozenTimePeriodInSecs = 99900000
repFactor = auto
homePath = $SPLUNK_DB/smartstore_test/db
thawedPath = $SPLUNK_DB/smartstore_test/thaweddb
coldPath = $SPLUNK_DB/smartstore_test/colddb
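A 307 over http and a 400 over https is the pattern I'd expect when the endpoint and region don't line up with what the client sends. One thing worth trying, sketched below: keep https, and point remote.s3.endpoint at the plain regional S3 endpoint rather than the bucket-qualified access-point hostname, since Splunk composes the bucket name from `path` itself:

```
[volume:s3]
storageType = remote
path = s3://<bucket_name>/
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com
remote.s3.access_key = <access_key>
remote.s3.secret_key = <secret_key>
```

If you are required to go through the S3 access point rather than the bucket directly, I'd confirm with Splunk support how your version expects access points to be configured before relying on this.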
I am trying to figure out how to track the timestamp whenever I change the status of a recently opened investigation, so that I have a record of those changes. I have checked the ES audit section, specifically the Investigation Overview, but there is nothing similar to this. I also checked the _audit index, but I'm not sure whether investigation actions are tracked there, which would otherwise be a really good place to observe them. Thanks.
I support the ServiceNow instance at the company I work for. We have a group that uses Splunk Phantom. They asked us for a user ID, which we created for them, to use the out-of-the-box ServiceNow API. There isn't any Splunk software installed on the ServiceNow side.

The group that manages Splunk Phantom said they are trying to query ServiceNow to see Request Item records and variables. Request Item records are used by the ServiceNow Service Catalog to request goods and services. Catalog items have variables, which are questions the requester answers for the catalog item they are requesting.

Attached is a screenshot from the Splunk admin, who is not able to see the variables on the Request Item records in ServiceNow. Does anyone know what the attached screenshots indicate, and how does Splunk query Service Catalog items and variables in ServiceNow?
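One likely wrinkle: in ServiceNow, the variable values don't live on sc_req_item itself; they are in sc_item_option, linked to the RITM through the many-to-many table sc_item_option_mtom, and the integration user needs read ACLs on all of those tables or the query comes back empty. A sketch of the Table API call Phantom would effectively be making (the instance hostname and field list are illustrative; check them against the Table API docs):

```python
import urllib.parse

INSTANCE = "https://yourinstance.service-now.com"  # hypothetical instance

def ritm_variables_url(ritm_sys_id):
    """Table API query for a Request Item's variable values via sc_item_option_mtom."""
    params = urllib.parse.urlencode({
        "sysparm_query": f"request_item={ritm_sys_id}",
        # dot-walk through the m2m record to the question text and answer
        "sysparm_fields": "sc_item_option.item_option_new.question_text,sc_item_option.value",
    })
    return f"{INSTANCE}/api/now/table/sc_item_option_mtom?{params}"
```

If the Splunk admin's queries target only sc_req_item, that alone would explain why no variables appear in their results.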
I have 3 stats panels in a row. I need the first panel's width to be 40%, the second 40%, and the third 20% (a solution given in pixels would be even better).

By the way, the scenario below is not my requirement; there, a solution is given for different types of panels, but here I have the same type of panel. It would be better if the solution could be applied from the UI itself rather than through changes to the backend Splunk infrastructure. Thanks.
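One approach that works for same-type panels is the hidden-HTML-panel CSS trick in Simple XML: give each panel an id and set widths with CSS from within the dashboard source, no backend changes needed. A sketch (the ids and token name are mine; pixel values like `width: 480px !important` work the same way):

```
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        #panel_a { width: 40% !important; }
        #panel_b { width: 40% !important; }
        #panel_c { width: 20% !important; }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel id="panel_a"><!-- first stats panel --></panel>
  <panel id="panel_b"><!-- second stats panel --></panel>
  <panel id="panel_c"><!-- third stats panel --></panel>
</row>
```

The `depends` on an unset token keeps the style row invisible while its CSS still applies to the page.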
Hi, I'm observing a dip in the OHS graph in Splunk every 30 minutes. Why am I seeing a dip in the graph every 30 minutes? Is there any way to change the interval to 45 minutes in the Splunk OHS graph? Thank you in advance!
I've got a specific requirement to fine-tune a search. The search is something like:

<basesearch> | fields other_fields,host,username | join type=left host username [ `mycomplexmacro` | fields macro_fields,pci_flag,host,username] | table *

The issue I'm facing is that if `pci_flag=no`, I want to ensure the join does NOT include `host`, but if `pci_flag=yes` I want to be strict and compare both host and username. Unfortunately `pci_flag` is not present in the <basesearch>, so the only way to determine it is after the inner search.

So essentially, if `pci_flag=no` I want the search to become the following (note that host is no longer in the join):

| join type=left username [ `mycomplexmacro` | fields macro_fields,pci_flag,username]

and if `pci_flag=yes` I want the search to stay strict, with host present:

| join type=left host username [ `mycomplexmacro` | fields macro_fields,pci_flag,host,username]

I tried options like the following, but in vain:

eval host=if(pci_flag==no,"*",host)
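One way to express this without a conditional join, sketched under the assumption that joining on username alone doesn't multiply rows unacceptably: always join on username only, carry the subsearch's host under a different name, and filter afterwards so the host comparison is only enforced when pci_flag is yes:

```
<basesearch>
| fields other_fields, host, username
| join type=left username
    [ `mycomplexmacro` | fields macro_fields, pci_flag, host, username | rename host as macro_host ]
| where isnull(pci_flag) OR pci_flag="no" OR host=macro_host
| table *
```

The `isnull(pci_flag)` clause preserves the left-join behavior for base rows with no match; note that `join` keeps only the first matching subsearch row per username, so this is a sketch to validate against your data rather than a drop-in fix.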