All Topics

Hi, I am trying to run Splunk using Kubernetes on my M3 Mac. I am executing the following command (as described at https://github.com/splunk/splunk-operator/blob/main/docs/README.md#installing-the-splunk-operator):

cat <<EOF | kubectl apply -n splunk-operator -f -
apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: s1
  finalizers:
    - enterprise.splunk.com/delete-pvc
EOF

and I am getting the error:

Failed to pull image "splunk/splunk:9.1.3": no matching manifest for linux/arm64/v8 in the manifest list entries

What do I need to do?
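The error means that the splunk/splunk:9.1.3 image publishes no linux/arm64 variant, so the arm64 Mac node cannot pull it. One possible workaround, sketched below under the assumption that the Standalone custom resource's spec.image field can override the default image, is to point the CR at a tag that does ship an arm64 manifest (or at an amd64 image run under emulation, if your cluster supports that):

```yaml
apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: s1
  finalizers:
    - enterprise.splunk.com/delete-pvc
spec:
  # Hypothetical override: substitute a tag that publishes an arm64
  # manifest, or a multi-arch image you have built and pushed yourself.
  image: "splunk/splunk:<arm64-capable-tag>"
```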
I have some configurations in a local app.conf and I would like to read them programmatically, before streaming events. How can I do this using Python? Thanks!
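Since the question mentions Python: Splunk .conf files use an INI-like layout, so for a quick read of a local file Python's standard configparser can parse them. This is only a sketch for a simple local file (the file name and stanza below are illustrative); inside a real app or modular input, the Splunk Python SDK's configuration endpoints are the supported route, because configparser knows nothing about Splunk's default/local layering.

```python
import configparser

def read_local_conf(path):
    """Parse an INI-style Splunk .conf file into a dict of stanzas."""
    parser = configparser.ConfigParser(strict=False, interpolation=None)
    parser.read(path)
    return {stanza: dict(parser.items(stanza)) for stanza in parser.sections()}

# Illustrative stand-in for a local app.conf.
with open("app.conf", "w") as f:
    f.write("[ui]\nis_visible = 1\nlabel = My App\n")

conf = read_local_conf("app.conf")
print(conf["ui"]["label"])  # -> My App
```

Note that configparser returns every value as a string, so booleans and numbers need converting by hand.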
Hi all, I want to ask about some of the values in the "main" index. I am trying to look at CPU and memory usage for one of my servers, so I tried SPL like the one below:

index="main" host="MyServer" | fields _time, host, source, sourcetype, collection, counter, instance, linecount, object, Value

Here is the question: where does the raw data behind Value come from on the server? I tried to match my server's CPU and memory against Process Explorer, but I am not sure, because the values fluctuate so quickly. Can you give me any advice on how I can resolve this? Thanks
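If these events come from a Windows performance-monitor input, the Value field is the sampled counter reading, which naturally jumps around between samples. Averaging over a time window makes it easier to compare against what Process Explorer shows. A sketch (the object and counter values below are assumptions about how the input is configured):

```
index="main" host="MyServer" object="Processor" counter="% Processor Time"
| timechart span=5m avg(Value) AS avg_cpu_pct
```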
Hi Fellow Splunkers, perhaps I can get some different perspectives. I am setting up a new standalone SH to be joined to an existing indexer cluster, but I seem to be running into an issue: when I try to point this server at the idx cluster, specifying the idx CM as the manager [manager_uri], I get an error and the SH will not join as an SH node. I am referencing the docs here: Enable the search head - Splunk Documentation. I also note that there is an existing SH cluster already joined to the indexer cluster. When I edit server.conf, I get an error that the SH cannot connect to the manager node, even though I have verified and double-checked the stanzas and key values. From what I have described, what might be the issue?
I'm running the universal forwarder as a service in Docker. Here is my docker-compose config:

services:
  splunkuniversalforwarder:
    platform: "linux/amd64"
    hostname: splunkuniversalforwarder
    image: splunk/universalforwarder:latest
    volumes:
      - opt-splunk-etc:/opt/splunk/etc
      - opt-splunk-var:/opt/splunk/var
      - ./splunk/splunkclouduf.spl:/tmp/splunkclouduf.spl
    ports:
      - "8000:8000"
      - "9997:9997"
      - "8088:8088"
      - "1514:1514"
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_USER=root
      - SPLUNK_ENABLE_LISTEN=9997
      - SPLUNK_CMD="/opt/splunkforwarder/bin/splunk install app /tmp/splunkclouduf.spl"
      - DEBUG=true
      - SPLUNK_PASSWORD=<root password>
      - SPLUNK_HEC_TOKEN=<HEC token>
      - SPLUNK_HEC_SSL=false

I have an HTTP Event Collector configured in my Splunk Free Trial account. When running docker-compose, a lot of things seem to go well, and then I hit this:

TASK [splunk_universal_forwarder : Setup global HEC] ***************************
fatal: [localhost]: FAILED! => {
    "changed": false
}
MSG:
POST/services/data/inputs/http/httpadmin********8089{'disabled': '0', 'enableSSL': '0', 'port': '8088', 'serverCert': '', 'sslPassword': ''}NoneNoneNone;;; AND excep_str: No Exception, failed with status code 404: {"text":"The requested URL was not found on this server.","code":404}

I can see no reference to POST /services/data/inputs/http in any Splunk docs. Can anyone shed any light on this, please?
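One hypothesis (an assumption, not confirmed by the error alone): setting SPLUNK_HEC_TOKEN and SPLUNK_HEC_SSL on a universal forwarder container makes the bundled Ansible role try to create a HEC input on the forwarder itself, and the 404 suggests the forwarder's management API does not expose that endpoint. HEC normally lives on the indexer/Splunk Cloud side, so the sketch of the change would simply be dropping those variables from the UF's environment:

```yaml
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_ENABLE_LISTEN=9997
      - SPLUNK_PASSWORD=<root password>
      # Dropped (hypothesis): SPLUNK_HEC_TOKEN and SPLUNK_HEC_SSL, so the
      # "Setup global HEC" task is skipped; send HEC traffic to the
      # Splunk Cloud HEC endpoint instead of to the forwarder.
```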
Hello Splunk community! I started my journey with Splunk one month ago and I am currently learning Splunk Enterprise Security. I have a very specific question: I am planning to use about 10-15 correlation searches in my ES, and I would like to know if I need to scale up the resources of my Splunk machine, which is an Ubuntu Server 20.04 with 32 GB RAM, 32 vCPU, and a 200 GB hard disk. I have an all-in-one installation scenario because I am just learning the basics of Splunk at the moment, but I would like to know: how many resources do correlation searches consume in Splunk? How much RAM and CPU does one average correlation search consume in Splunk Enterprise Security?
Hello Team, we have a requirement to support Protobuf data ingestion for a Splunk endpoint. Many customers have expressed interest in sending data to Splunk as Protobuf messages and making it available for search. What's the input? https://github.com/open-telemetry/opentelemetry-proto/blob/v1.0.0/opentelemetry/proto/collector/logs/v1/logs_service.proto The input would be the Protobuf message ExportLogsServiceRequest. Unmarshalled proto:

[ resource:{attributes:{key:"cloud.provider" value:{string_value:"data"}} attributes:{key:"ew_id" value:{string_value:"3421"}} attributes:{key:"ip" value:{string_value:"0.1.0.1"}}} scope_logs:{log_records:{time_unix_nano:1714188733 observed_time_unix_nano:1714188733 severity_text:"FATAL" body:{string_value:"onOriginRequest%20error%20level%2065553GXK3l7A1TG7QNiNsif0M4eZ7RmimyGeSu8GfyjGQTmbxjOEpDktybtjuWpb"} attributes:{key:"requestId" value:{string_value:"123456 Fp5zWvbr2cdYaOgC2LmC7hEs2"}} attributes:{key:"custom" value:{string_value:"3421 LUl8ovNHb6jO9Ak"}} attributes:{key:"queueit" value:{string_value:"1.2.3 sWcAL"}} attributes:{key:"ds2custom_message" value:{string_value:"Splunk POC Request 3qE2lAUxf0iDyCcxeNZkra3gK"}} trace_id:"\xd3\xcd8\xd3m5\xd3M4\xd3M4\xd3M4\xd3M4\xd3M4\xd3M4" span_id:"ӽ7\xd3m5\xd3M4\xd3M4\xd3M4\xd3M4\xd3M4\xd7]u"}} ]

curl -k -vvv -H "Authorization: Splunk XXXXX" -H 'Content-Type: application/x-protobuf' 'https://prd-p-pwf16.splunkcloud.com:8088/services/collector' --data-binary @data

How can we ingest the protobuf message?
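HEC itself expects JSON or raw payloads, so OTLP protobuf typically needs a translation hop. A sketch, assuming an OpenTelemetry Collector (contrib distribution, which includes the splunk_hec exporter) sits in front of Splunk; the token and endpoint are placeholders, and this is an architectural suggestion rather than a confirmed configuration:

```yaml
receivers:
  otlp:
    protocols:
      http: {}   # accepts ExportLogsServiceRequest as application/x-protobuf

exporters:
  splunk_hec:
    token: "XXXXX"                                              # placeholder
    endpoint: "https://prd-p-pwf16.splunkcloud.com:8088/services/collector"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]
```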
Hi, I have a vast data set, with a sample below. I need to group the data by three columns, find the latest timestamp within each group, and get the fourth column's value at that latest timestamp.

Deployed_Data_time   env    app    version
4/16/2024 15:29      axe1   app1   v-228
4/16/2024 15:29      axe1   app1   v-228
9/15/2023 8:12       axe1   app1   v-131
9/15/2023 8:05       axe2   app1   v-120
9/12/2023 1:19       axe2   app1   v-128
4/16/2024 15:29      axe2   app2   v-628
4/16/2024 15:26      axe2   app2   v-626
9/15/2023 8:12       axe2   app2   v-531
9/15/2023 8:05       axe1   app2   v-530
9/12/2023 1:19       axe1   app2   v-528

and I need the output as:

app    axe1    axe2
app1   v-228   v-120
app2   v-530   v-628

I tried something like the query below, but the output is not as expected:

index=*.log source=*Report*
| eval latestDeployed_version=Deployed_Data_time."|".version
| eval latestVersion=Deployed_Data_time."|".version
| stats latest(Deployed_Data_time) AS Deployed_Data_time values(env) AS env max(latestVersion) AS latestVersion BY app
| rex field=latestVersion "[\|]+(?<version>.*)"
| table app, version, env
| chart values(version) by app, env limit=0
| fillnull value="Not Deployed"

Please help me achieve this. Thanks
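One way to do this is to parse the timestamp, keep only the newest event per (app, env) pair, and then pivot. A sketch, assuming Deployed_Data_time is always in M/D/YYYY H:MM format (the base search is taken from the question):

```
index=*.log source=*Report*
| eval deployed_at=strptime(Deployed_Data_time, "%m/%d/%Y %H:%M")
| sort 0 - deployed_at
| dedup app env
| chart values(version) OVER app BY env
| fillnull value="Not Deployed"
```

The sort/dedup pair keeps the row with the greatest deployed_at for each app/env combination before the chart pivots env into columns.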
In a dashboard, a panel shows different data, but when we open the panel query using "open in search" it shows correctly.

<form version="1.1" theme="dark">
  <label>DMT Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) repoter.dataloadingintiated
|stats count by local
|append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) task.dataloadedfromfiles NOT "error" NOT "end_point" NOT "failed_data" |stats count as FilesofDMA]
|append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) "app.mefwebdata - jobintiated"
|eval host = case(match(host_ip, "12.234"), "HOP"+substr(host, 120,24), match(host_ip, "10.123"), "HOM"+substr(host, 120,24))
|eval host = host + " - " + host_ip
|stats count by host
|fields - count
|appendpipe [stats count |eval Error="Job didn't run today" |where count==0 |table Error]]
|stats values(host) as "Host Data Details", values(Error) as Error, values(local) as "Files created localley on AMP", values(FilesofDMA) as "File sent to DMA"</query>
          <earliest>$timepicker.earliest$</earliest>
          <latest>$timepicker.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentageRow">false</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="host_ip">
          <colorPalette type="map">{"12.234.201.22":#53A051, "10.457.891.34":#53A051, "10.234.34.18":#53A051, "10.123.363.23":#53A051}</colorPalette>
        </format>
        <format type="color" field="local">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="FilesofDMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Files created localley on AMP">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="File sent to DMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Error">
          <colorPalette type="map">{"Job didn't run today":#DC4E41}</colorPalette>
        </format>
        <format type="color" field="Host Data Details">
          <colorPalette type="map">{"HOM-jjderf - 10.123.34.18":#53A051, "HOM-iytgh - 10.123.363.23":#53A051, "HOP-wghjy - 12.234.201.22":#53A051, "HOP-tyhgt - 12.234.891.34":#53A051}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</form>

Panel displaying in dashboard: [screenshot of the panel, omitted]

When we open the panel in search, it shows as below (this is the correct data):

Host Data Details            Error   Files created localley on AMP   File sent to DMA
HOM-jjderf - 10.123.34.18            221                             86
HOM-iytgh - 10.123.363.23
HOP-wghjy - 12.234.201.22
HOP-tyhgt - 12.234.891.34
Hello, I recently encountered an issue with Splunk Cloud. After creating a new eval in the "Fields" menu under "calculated fields," named 'src' for the source type "my_source_type," I adjusted the permissions to make it readable and writable for my role, with app permissions set to all apps. However, upon saving these permissions, the eval disappeared, and I couldn't locate it anywhere. Thinking it might not have saved properly, I attempted to recreate it with the same name and source type. However, when I tried to adjust the permissions, I received a red error banner stating: "Splunk could not update permissions for resource data/props/calcfields [HTTP 409] [{'type': 'ERROR', 'code': None, 'text': 'Cannot overwrite existing app object'}]" Any recommendations on where I should search to locate the initially created eval that seems to have gone missing? Thank you.
Hello Team, I have error data coming into an index (we filter so that only error logs are sent to this index). I want to create an alert whenever any new events arrive in that index, without sending duplicate alerts.

index=error_idx sourcetype=error_srctyp
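One common pattern is to alert on any result and suppress repeats with alert throttling on a signature field. A sketch (hashing _raw into a signature is an assumption; substitute whatever field identifies "the same" error in your data):

```
index=error_idx sourcetype=error_srctyp
| eval signature=md5(_raw)
| stats count latest(_time) AS last_seen BY signature
```

Then, in the alert's settings, enable Throttle, suppress by the signature field, and choose a suppression window, so the same error does not re-alert within that window.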
My inputs.conf from the deployment server (confirmed that it is being pushed to all hosts correctly):

{WinEventLog://Security}
index = wineventlog
sourcetype = WinEventLog:Security
disabled = 0
whitelist = EventCode="0-6000"
blacklist = EventCode="1,2,3,4,"

(I substituted other values for the blacklisted ones.) Despite being explicitly disallowed, all host forwarders are still collecting and forwarding these events to the indexer. Am I misconfiguring this?
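For comparison, a sketch of how this stanza is usually written: the stanza header uses square brackets, and the key=regex form of whitelist/blacklist takes a regular expression rather than a quoted comma or range list (the specific event codes below just mirror the question):

```
[WinEventLog://Security]
disabled = 0
index = wineventlog
sourcetype = WinEventLog:Security
# Plain event-ID lists and ranges need no key= prefix:
whitelist = 0-6000
# key=regex form: the value is a regex, not a comma list.
blacklist = EventCode="^(1|2|3|4)$"
```

Overlapping whitelist and blacklist entries interact in version-specific ways, so verify the precedence against the inputs.conf spec for your Splunk version.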
Anyone know how to accomplish the Splunk equivalent of the following SQL?   SELECT * FROM (SELECT 'dev' AS env, 0 as value UNION SELECT 'beta' as env, 0 as value UNION SELECT 'prod' as env, 0 as value)   I intend to combine this arbitrary, literal dataset with another query, but I want to ensure that there are rows for 'dev', 'beta', and 'prod' whether or not Splunk is able to find any records for these environments. The reason for this is, I'm trying to create an alert that will trigger if a particular metric is NOT published often enough in Splunk for each of these environments.
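A sketch of the SPL equivalent: build the literal rows with makeresults and mvexpand, append the real query, then take the max per env so a published metric overrides the literal 0. The index name and stats clause in the appended search are placeholders for your actual query.

```
| makeresults
| eval env=split("dev,beta,prod", ",")
| mvexpand env
| eval value=0
| fields env value
| append
    [ search index=my_metrics_index
      | stats count AS value BY env ]
| stats max(value) AS value BY env
```

Because the literal rows are always present, every env appears in the output even when the appended search returns nothing for it, which is exactly what the "metric not published" alert needs.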
Hi All, I am unable to see the logs for a source even though the internal logs show the file is being tailed and read. Can you please guide me as to what could be wrong here? I can see in the internal logs:

INFO Metrics - group=per_source_thruput, series="log_source_path", kbps=0.056, eps=0.193, kb=1.730, ev=6, avg_age=0.000, max_age=0

But I don't see the logs in Splunk. The recent logs are there in the file on the host, and other sources are coming into Splunk fine.
Hi team, I need help creating a query with three different thresholds for three different events in a single Splunk alert. For example:

index=abc sourcetype=xyz "warning" OR "Error" OR Critical

If any of these ("warning" OR "Error" OR Critical) occurs 5 times in the last 15 minutes, the alert should be triggered.
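A sketch of one way to do this: classify each event, count per class, and keep only the classes that exceed their own threshold; the alert then triggers on "number of results > 0" with a 15-minute time range. The three threshold numbers are placeholders to adjust per class.

```
index=abc sourcetype=xyz ("warning" OR "Error" OR "Critical")
| eval level=case(searchmatch("Critical"), "critical",
                  searchmatch("Error"),    "error",
                  true(),                  "warning")
| stats count BY level
| where (level="critical" AND count>=5)
     OR (level="error"    AND count>=5)
     OR (level="warning"  AND count>=5)
```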
I am fairly new to the Splunk platform/ community; I am in learning mode and I hope to get some help here. How do I set up/configure an alert on a set of Windows Servers to notify me when a particular set of services stops? For example, I have three services that start with the naming of TDB, how can I configure Splunk to alert if any of those services stop on a particular server name. Thanks much.
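A sketch, assuming the servers forward the Windows System event log, where EventCode 7036 records service state changes; the index name and the "TDB" prefix match are assumptions to adapt to your environment:

```
index=wineventlog sourcetype="WinEventLog:System" EventCode=7036 "entered the stopped state"
| rex field=Message "The (?<service>.+) service entered the stopped state"
| where like(service, "TDB%")
| stats latest(_time) AS last_stopped BY host service
```

Save this as an alert that triggers when the number of results is greater than zero over a short scheduled window.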
My search ends with:

| table Afdeling 20* Voorlaatste* Laatste* verschil

It has several detail rows and one row with totals. I want to use fillnull on the totals for the 20* columns (2023-10, 2023-11, etc.) but not for Voorlaatste*, Laatste*, and verschil. I can't use

| fillnull 20* value="0.0"

because that adds a literal column "20*", and I don't want to write out fillnull 2023-10 etc. Is there a way to do this?
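The foreach command accepts the same wildcard and applies its subsearch template to each matching field, which sidesteps fillnull's literal treatment of 20*. A sketch:

```
| foreach 20*
    [ eval "<<FIELD>>" = coalesce('<<FIELD>>', "0.0") ]
```

Here <<FIELD>> expands to each matching column name; the single quotes read the field's value, while the double quotes name the field being assigned.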
Hi All, how do I exclude particular values of a field in this query? In my scenario, if the message contains "file not found", I don't want to show those transactions. Below is the query where I tried to exclude it:

index=mulesoft environment=* applicationName IN ("processor","api")
| where message!="No files found for*"
| stats values(content.InterfaceName) as InterfaceName values(content.Error) as error values(message) as message values(priority) as priority min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time BY applicationName, correlationId
| table Status InterfaceName applicationName Timestamp "Total Elapsed Time" FileList "SuccessFile/FailureFile" Response correlationId
| search InterfaceName IN ("Test")

And I tried:

| search NOT message IN ("No files found for*")
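The likely catch in the query above: the where command compares literally, so * is not a wildcard there; wildcards do work in the search command, and inside where the equivalent is like() with % as the wildcard. Two sketches based on the query in the question, filtering before the stats so the unwanted messages never reach the aggregation:

```
index=mulesoft environment=* applicationName IN ("processor","api") NOT message="No files found for*"
```

or, keeping the where clause:

```
| where NOT like(message, "No files found for%")
```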
Hello, using the query below I am able to get the title and definition of macros:

|rest /servicesNS/-/-/admin/macros |table title,definition

Can the same be achieved using https://*****:8089/servicesNS/-/-/admin/macros?output_mode=json in a Postman call, so that I get only the title and definition in the response of the API call? I tried using the f filter and search parameters as per the documentation, but it's not giving the required response. Thanks in advance.
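A sketch of what usually works against the raw REST endpoint, with placeholders for host and credentials: the f parameter limits which content keys are returned, while the title always comes back as each entry's name, so it does not need to be requested. Behaviour can vary by Splunk version, so treat this as something to verify rather than a guaranteed recipe:

```shell
curl -k -u admin:<password> \
  "https://<host>:8089/servicesNS/-/-/admin/macros?output_mode=json&count=0&f=definition"
```

Each entry in the JSON response then carries its name (the macro title) plus a content object containing only the definition.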