All Topics

Good day all. I am new here and will be grateful for any help toward resolving my issue. I got stuck installing Splunk Enterprise using the command copied from the official site:

```
sudo dpkg -I splunk-9.2.2....
```

After running the command, it shows the error below:

dpkg: Error

Please help.
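In case it helps to compare flags: dpkg is case-sensitive, and uppercase `-I` (`--info`) only prints package metadata, while lowercase `-i` (`--install`) actually performs the installation. A minimal sketch (the `.deb` filename below is illustrative; use the exact file you downloaded):

```shell
# -I / --info: inspect the package without installing it
# -i / --install: actually install it
sudo dpkg -i splunk-9.2.2-<build>-linux-2.6-amd64.deb
```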
I'm using Splunk Cloud Search & Reporting with Kubernetes 1.25 and Splunk OTel Collector 0.103.0. I have Kubernetes pods with multiple containers. Most of the containers have their logs scraped and sent to the Splunk index given by the 'splunk.com/index' namespace annotation, i.e. normal Splunk OTel Collector log scraping. But one container's logs must go to a different index. The pods that have a container whose logs must be routed differently carry a pod annotation like 'splunk-index-{container_name}=index'. I had this working in Splunk Connect for Kubernetes using this config:

```yaml
customFilters:
  # Filter that sets the splunk_index name from a container annotation.
  # - The annotation is a concatenation of 'splunk-index-' and the name of
  #   the container. If that annotation exists on the container, then it is
  #   used as the splunk index name; otherwise the default index is used.
  # - This is used in june-analytics and june-hippo-celery to route some
  #   logs via an annotated sidecar that tails a log file from the primary
  #   application container.
  # - This could be used by any container to specify its splunk index.
  SplunkIndexOverride:
    tag: tail.containers.**
    type: record_transformer
    body: |-
      enable_ruby
      <record>
        splunk_index ${record.dig("kubernetes", "annotations", "splunk-index-" + record["container_name"]) || record["splunk_index"]}
      </record>
```

My attempt to do this with the Splunk OTel Collector uses the following config in the values.yaml file for the Splunk OTel Collector v0.103.0.
Helm chart to add a processor to check for the annotation:

```yaml
agent:
  config:
    processors:
      # Set the splunk index for the logs of a container whose pod is
      # annotated with `splunk-index-{container_name}=index`
      transform/logs/analytics:
        error_mode: ignore
        log_statements:
          - context: log
            statements:
              - set(resource.attributes["com.splunk.index"], resource.attributes[Concat("splunk-index-", resource.attributes["container_name"], "")]) where resource.attributes[Concat("splunk-index-", resource.attributes["container_name"], "")] != nil
```

The splunk-otel-collector logs show this error:

```
Error: invalid configuration: processors::transform/logs/analytics: unable to parse OTTL statement "set(resource.attributes[\"com.splunk.index\"], resource.attributes[Concat(\"splunk-index-\", resource.attributes[\"container_name\"], \"\")]) where resource.attributes[Concat(\"splunk-index-\", resource.attributes[\"container_name\"], \"\")] != nil": statement has invalid syntax: 1:65: unexpected token "[" (expected ")" Key*)
```

It seems it does not like the use of Concat() to create a lookup key for attributes. So how would I do this in the Splunk OTel Collector?
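One observation, not a verified fix: the OTTL `Concat` function takes a list of values as its first argument and a delimiter string as its second, so the call would normally be written as in the sketch below. Note, though, that the parse error (`expected ")" Key*`) suggests this collector version only accepts literal string keys when indexing `attributes[...]`, so even the corrected signature may not parse:

```yaml
# Sketch only -- corrected Concat signature (list first, delimiter second).
# The dynamic key inside attributes[...] may still be rejected by the parser.
statements:
  - set(resource.attributes["com.splunk.index"], resource.attributes[Concat(["splunk-index-", resource.attributes["container_name"]], "")]) where resource.attributes[Concat(["splunk-index-", resource.attributes["container_name"]], "")] != nil
```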
The data coming into one of our indexers recently changed. Now the format is different, and the fields are different; the values are basically the same. I need to be able to find where this index and its data were being used in our environment's lookups, reports, alerts, and dashboards. Any idea how this can be accomplished?
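A starting point that is sometimes used for this kind of audit (a sketch; replace `your_index` with the real index name, and note this only catches knowledge objects whose saved-search text mentions the index literally):

```
| rest /servicesNS/-/-/saved/searches
| search search="*your_index*"
| table title eai:acl.app eai:acl.owner search
```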
Hi All; I have a list of events which includes a field called reported_date; the format is yyyy-mm-dd. I'm trying to create a search that looks for reported_date within the last 15 months of the current day. Is it possible to do an earliest and latest search within a specific field? Note: _time does not align with reported_date. Any assistance would be greatly appreciated! TIA
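earliest/latest only constrain _time, but the same window can usually be computed from the field itself. A sketch, assuming reported_date always parses with `%Y-%m-%d`:

```
| eval reported_epoch = strptime(reported_date, "%Y-%m-%d")
| where reported_epoch >= relative_time(now(), "-15mon@d")
```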
Hello, be careful: TA-WALLIX_Bastion/default/passwords.conf can crash its own settings/configuration page as well as the configuration pages of other add-ons.
Trying to create a search that will show which capabilities a user has used within the last year.
I have been experimenting with splunk-ui and created an app to make calls from Splunk Web to the Splunk REST API. However, I keep getting errors like this:

```
The same origin policy prohibits access to external resource at https://localhost:8090/servicesNS/nobody/path_redacted_but_is_valid?output_mode=json. (Reason: CORS-Header 'Access-Control-Allow-Origin' is missing)
```

This is what the call looks like:

```javascript
const url = `https://localhost:8090/servicesNS/nobody/${eventType.acl.app}/saved/eventtypes/${eventType.title}?output_mode=json`
const response = await fetch(url, {
    credentials: "include",
    method: "POST",
    redirect: "follow",
    body: JSON.stringify({'search': eventType.content.search})
});
return response.json();
```

This is my server.conf:

```
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/cert.pem

[httpServer]
crossOriginSharingPolicy = https://localhost:8090
crossOriginSharingHeaders = *
```

I can access https://localhost:8090/servicesNS/* "by itself" in my browser. I am using Firefox 128 and Splunk 9.0.5. I can set crossOriginSharingPolicy to "*" (without quotes), but that causes the browser to reject any requests that require authentication, so this is no solution.
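One detail that may matter (an assumption about this setup, worth verifying): crossOriginSharingPolicy is matched against the Origin header of the page making the request, which here is Splunk Web rather than splunkd's own address. If Splunk Web runs on the default port, the stanza would look like:

```
[httpServer]
# Allow the Splunk Web origin (the page issuing the fetch), not splunkd's own origin
crossOriginSharingPolicy = https://localhost:8000
```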
Hello, I am running Splunk 9.1.2 on Linux, and ever since I installed a new internal certificate I am not able to run Splunk. Below are some of the warnings I was able to find in splunkd.log. Would anyone have any idea of how this can be addressed and fixed? Thank you for any suggestions!

```
WARN  SSLCommon [12196 webui] - Received fatal SSL3 alert. ssl_state='error', alert_description='handshake failure'.
WARN  HttpListener [12196 webui] - Socket error from "…" while idling: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher
WARN  HttpListener [12196 webui] - Socket error from "…" while idling: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknow protocol
WARN  SSLCommon [12196 webui] - Received fatal SSL3 alert. ssl_state='error', alert_description='bad record mac'.
WARN  HttpListener [12196 webui] - Socket error from "…" while idling: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
WARN  SSLCommon [12196 webui] - Received fatal SSL3 alert. ssl_state='error', alert_description='decrypt error'.
WARN  HttpListener [12196 webui] - Socket error from "…" while idling: error:1408C095:SSL routines:ssl3_get_finished:digest check failed
```
I am looking into developing a custom Splunk app that lets us manage our knowledge objects in bulk. The idea is to create custom REST endpoints and call them from a splunk-ui based app in Splunk Web. Looking into the documentation for custom REST endpoints (https://dev.splunk.com/enterprise/docs/devtools/customrestendpoints/customrestscript), it strikes me that the sample code imports a `splunk` module for which I can find no documentation. Am I missing something here? Am I just supposed to use splunklib? I would greatly appreciate feedback on the plan to use custom REST endpoints with a Splunk Web based UI.
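For what it's worth, the `splunk` package in that sample is the Python package bundled inside Splunk Enterprise itself; it is only importable under splunkd's own interpreter, which is presumably why there is no standalone documentation (splunklib, by contrast, is the separately documented SDK for external clients). A minimal sketch of a persistent handler under that assumption; the class name and response fields below are illustrative, and this only runs inside splunkd's Python, not a standalone interpreter:

```python
# Runs only inside splunkd's bundled Python -- not runnable standalone.
import json
from splunk.persistconn.application import PersistentServerConnectionApplication

class BulkKnowledgeHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        PersistentServerConnectionApplication.__init__(self)

    def handle(self, in_string):
        # in_string is a JSON description of the incoming HTTP request
        request = json.loads(in_string)
        return {"payload": json.dumps({"method": request.get("method")}),
                "status": 200}
```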
Hi there, we are currently ingesting Palo Alto threat logs into Splunk, but we are missing the 'URL' log_subtype. Current subtypes being ingested for PA threat logs include the spyware, vulnerability, and file subtypes. Can anyone please tell me what changes need to be made on the Palo Alto and/or Splunk side to ingest log_subtype=URL threat logs? Regards, CW
Hi there, I got an issue when setting up the Splunk connector in OpenCTI. When I check the logs, it says terminated. I followed the guide here: https://the-stuke.github.io/posts/opencti/#connectors. I have already created the token and the API livestream in OpenCTI, and I also created collections.conf with an [opencti] stanza at $SPLUNK_HOME/etc/apps/appname/default/. By the way, I'm using the search app, so that is where I created collections.conf. Because I don't know the values of the fields OpenCTI will send, I didn't create any field list in [opencti]. My connector settings look like this:

```yaml
connector-splunk:
  image: opencti/connector-splunk:6.2.4
  environment:
    - OPENCTI_URL=http://opencti:8080
    - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN} # Splunk OpenCTI User Token
    - CONNECTOR_ID=MYSECRETUUID4 # Unique UUIDv4
    - CONNECTOR_LIVE_STREAM_ID=MYSECRETLIVESTREAMID # ID of the live stream created in the OpenCTI UI
    - CONNECTOR_LIVE_STREAM_LISTEN_DELETE=true
    - CONNECTOR_LIVE_STREAM_NO_DEPENDENCIES=true
    - "CONNECTOR_NAME=OpenCTI Splunk Connector"
    - CONNECTOR_SCOPE=splunk
    - CONNECTOR_CONFIDENCE_LEVEL=80 # From 0 (Unknown) to 100 (Fully trusted)
    - CONNECTOR_LOG_LEVEL=error
    - SPLUNK_URL=http://10.20.30.40:8000
    - SPLUNK_TOKEN=MYSECRETTOKEN
    - SPLUNK_OWNER=zake # Owner of the KV Store
    - SPLUNK_SSL_VERIFY=true # Disable if using self signed cert for Splunk
    - SPLUNK_APP=search # App where the KV Store is located
    - SPLUNK_KV_STORE_NAME=opencti # Name of created KV Store
    - SPLUNK_IGNORE_TYPES="attack-pattern,campaign,course-of-action,data-component,data-source,external-reference,identity,intrusion-set,kill-chain-phase,label,location,malware,marking-definition,relationship,threat-actor,tool,vocabulary,vulnerability"
  restart: always
  depends_on:
    - opencti
```

I hope this information is enough to get it solved.
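One thing that may be worth double-checking (an assumption on my part, not something the logs confirm): the connector writes to Splunk's KV Store through the REST API, which normally listens on the management port 8089, usually over https, whereas 8000 is the Splunk Web UI port:

```yaml
# Management/REST port rather than the web UI port -- worth verifying
- SPLUNK_URL=https://10.20.30.40:8089
```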
Hi guys, we have a doubt regarding the user that executes Splunk in a Linux environment. Until now, we have always avoided using the root user; instead, we have always installed and configured Splunk on Linux in the following way: create a dedicated user, e.g. "splunkuser"; change ownership of the Splunk installation folder to that user; configure Splunk for boot autorun with the dedicated user. What is not clear to us, and what we didn't find in the docs, is this: suppose this user belongs to the sudoers group. Two questions arise: Can it be removed from sudoers, or do some Splunk-related processes require it to belong to that group? If it cannot be removed from sudoers entirely, which processes require it to keep such privileges?
I am trying to write a search query as part of our alerting. The intention is that if search results come from a certain container (kubernetes.container_name), e.g. service1, service2 or service3, then I should set a priority of 2. The problem is that the service names have a version number appended to them, e.g. service1-v1-0 or service3-v3-0b, to give two such examples. My intention was to use a combination of 'if' and 'match' with a wildcard to achieve this, but it doesn't seem to work. The reason for using this approach was that the start of the container_name remains the same, so as the version numbers change, the search should be future-proofed.

```
| eval priority = if(match(kubernetes.container_name, "^service1-v*|^service2-v*|^service3*"), "2", "Not set")
```

However this returns "Not set" even when the expected kubernetes.container_name is present. Can someone help me understand why this isn't working, how to fix it, and whether there might be a better way of doing this? Thanks!
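Two things that often bite here (hedged; worth verifying against your data): inside eval, a field name containing dots has to be wrapped in single quotes, or the dots are not treated as part of the field name; and match() takes a regex, where `v*` means "zero or more v characters" rather than a shell-style wildcard. Since match() already does partial matching, a prefix anchor is enough:

```
| eval priority = if(match('kubernetes.container_name', "^(service1|service2|service3)-v"), "2", "Not set")
```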
I have this query, which works well in Splunk 8, whereas I am getting a timechart with wrong values in Splunk 9. Is there any change in timechart or the case function that may cause this query not to work correctly?

```
index=my_index sourcetype=jetty_access_log host="apiserver--*" url="/serveapi*"
| eval status_summary=case(status<200, "Invalid", status<300, "2xx", status<400, "3xx", status<500, "4xx", status<600, "5xx", True(), "Invalid")
| timechart span=5m count(eval(status_summary="2xx")) as count_http2xx, count(eval(status_summary="3xx")) as count_http3xx, count(eval(status_summary="4xx")) as count_http4xx, count(eval(status_summary="5xx")) as count_http5xx, count(eval(status_summary="Invalid")) as count_httpunk
```

The first screenshot (not reproduced here) shows the correct result from Splunk 8; the second shows the incorrect result from Splunk 9.
I have 2 lookups. The first lookup has multiple fields including Hostname, and the second lookup has only a Hostname field. I want to find the Hostname values common to both lookups. How can I do that?
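A sketch of one common approach (the lookup names here are placeholders for your actual lookup definitions):

```
| inputlookup first_lookup
| join type=inner Hostname
    [ | inputlookup second_lookup | fields Hostname ]
| table Hostname
```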
Hi, I have been working for a day or so on getting this to work. Here is my web.conf configuration:

```
[settings]
httpport = 443
loginBackgroundImageOption = custom
loginCustomBackgroundImage = search:logincustombg/Hub_Arial.jpg
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/share/splunk/certs/decrypted-xxx_2025_wc.key
serverCert = /opt/splunk/share/splunk/certs/splunk-wc-xxx-com.pem
```

The key file is the decrypted private key; the server cert is the server certificate plus the intermediate and root from GoDaddy G2. I am currently running Splunk 9.2.x. I have not found a quick how-to for completing this task, and I am wondering if someone could give me a hand. I am using Wireshark with the Firefox browser, and sometimes I see the client request but never the server response. I am not doing this from the command line or on the web server; I am updating the files and restarting splunkd. Thanks in advance for your assistance. Please let me know if there are any questions.
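If it helps to sanity-check the PEM layout (a sketch; the input filenames below are illustrative): for Splunk Web SSL, the serverCert file usually holds the server certificate first, followed by the intermediate(s) and then the root, with the unencrypted key referenced separately via privKeyPath:

```shell
# Illustrative filenames -- concatenate in this order:
#   server cert, then GoDaddy G2 intermediate, then root
cat server.crt gd_g2_intermediate.crt gd_root.crt > splunk-wc-xxx-com.pem
```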
Hi all, I am currently having trouble finding the steps for forwarding syslogs from an Aruba switch into Splunk. The Aruba switch is set up to forward its syslogs to the correct IP on port 9997, which is the Splunk default. My issue is that these syslogs are not coming through, or are not visible. I have confirmed the computer can detect the switch and the switch sees the computer, so why are the syslogs not being forwarded? I have installed the Aruba Networks Add-on for Splunk, but the result has not changed. If someone knows the correct steps to set this up, could they provide them? Any help is greatly appreciated. Kind regards, Ben
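A detail that may explain it (hedged, since I can't see the config): 9997 is Splunk's forwarder-to-indexer (Splunk-to-Splunk) receiving port and does not understand raw syslog; a UDP or TCP data input (or a dedicated syslog server writing files that a forwarder monitors) is the usual route. A minimal inputs.conf sketch, assuming UDP 514; the sourcetype and index names are illustrative:

```
[udp://514]
sourcetype = aruba:syslog   # illustrative sourcetype
index = network             # illustrative index
connection_host = ip
```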
I have the following pipe-separated value file that I am having problems onboarding. The first row is the column headers; the second row is sample data. I'm getting an error when attempting to use the CREA_TS column as the timestamp.

```
CREA_TS|ACTION|PRSN_ADDR_AUDT_ID|PRSN_ADDR_ID|PRSN_ID|SRC_ID
07102024070808|INSERT|165713232|147994550|101394986|OLASFL
```

This is what I have in props, but I cannot get it to identify the timestamp. Any help will be greatly appreciated.

```
[ psv ]
CHARSET=UTF-8
FIELD_DELIMITER=|
HEADER_FIELD_DELIMITER=|
INDEXED_EXTRACTIONS=psv
KV_MODE=none
SHOULD_LINEMERGE=false
category=Structured
description=Pipe-separated value format. Set header and other settings in "Delimited Settings"
disabled=false
pulldown_type=true
LINE_BREAKER=([\r\n]+)
TIMESTAMP_FIELDS=CREA_TS
TIME_PREFIX=^
MAX_TIMESTAMP_LOOKAHEAD=14
```
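A guess worth testing (hedged): with INDEXED_EXTRACTIONS the timestamp format usually has to be spelled out via TIME_FORMAT, and 07102024070808 looks like month-day-year-hour-minute-second; TIME_PREFIX and MAX_TIMESTAMP_LOOKAHEAD apply to raw-text timestamp scanning rather than to structured TIMESTAMP_FIELDS parsing:

```
TIMESTAMP_FIELDS = CREA_TS
TIME_FORMAT = %m%d%Y%H%M%S
```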
I'm trying to understand how Splunk apps that interface with other systems via an API get their API calls scheduled.  I'm not a Splunk app developer - just an admin curious about how the apps loaded on my Splunk environment work.   When you install and configure an app that has a polling interval configured within it, how does Splunk normally control the periodic execution of that API call?  Does it use the Splunk scheduler, or does each app have to write some kind of cron service or compute a pause interval between each invocation of the API?   Seems to me like the scheduler would be the obvious choice, but I can't find anything in the Splunk documentation to tell me for certain that the scheduler is used for apps to periodically call polled features.
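As a concrete illustration of the usual pattern (hedged; this describes the modular-input mechanism most API-pulling apps use, though individual apps can differ): the app declares an input script, and splunkd itself re-invokes it according to the interval setting in inputs.conf, so neither cron nor the search scheduler is involved. The stanza name below is made up:

```
# inputs.conf for a hypothetical modular input
[my_api_input://production]
interval = 300   # seconds between runs; many inputs also accept a cron expression
```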
Any suggestions would be appreciated. In the first row I would like to show only the first 34 characters, and in the second row the first 39 characters. I figured out how to show only a certain number of characters for all rows, but not for individual rows.

```
| eval msgTxt=substr(msgTxt, 1, 49)
| stats count by msgTxt
```
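One way to vary the length per row is to derive it from something in the event first. A sketch, where the like() patterns are placeholders for whatever distinguishes the two message types:

```
| eval cutlen = case(like(msgTxt, "PATTERN_A%"), 34,
                     like(msgTxt, "PATTERN_B%"), 39,
                     true(), 49)
| eval msgTxt = substr(msgTxt, 1, cutlen)
| stats count by msgTxt
```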