All Posts

Hi, if I understand correctly, you are trying to configure an HEC receiver on this node? The Splunk Universal Forwarder doesn't support HEC. When you want to use HEC, the instance must be a full Splunk Enterprise instance, such as a heavy forwarder. r. Ismo
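For reference, a minimal sketch of what an HEC input could look like in inputs.conf on a heavy forwarder or indexer; the app location, input name, and token below are placeholders, not values from this thread:

    # $SPLUNK_HOME/etc/apps/my_hec_app/local/inputs.conf  (hypothetical app)
    [http]
    disabled = 0
    port = 8088

    # hypothetical input name and token
    [http://my_hec_input]
    token = 11111111-2222-3333-4444-555555555555
    disabled = 0
    index = main
    sourcetype = _json

The token is what clients then send in the "Authorization: Splunk <token>" header.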
Hi @Josh1890, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
I think this works, thanks
Sizing Splunk for ES has many performance factors to consider; it's not one size fits all. As everything is a search, you need sufficient resources to cater for users and the various aspects of the Splunk environment alongside the ES functions.

We go by a rule of thumb for ES sizing of 100GB per indexer (that is, the amount you're ingesting per day). I have seen this higher in some cases, so try to understand how much data volume is ingested into your Splunk per day. We typically dedicate the ES SH on its own for large environments. The reason is that when data comes into Splunk, the instance is also writing that data to disk, serving that data, searching it, running datamodel searches for the correlation rules, and rendering dashboards. On top of this you may have other users running ES or ad-hoc searches, so you can see there are many aspects to consider (CPU/RAM/IO/Network); otherwise it can become slow, and you don't want that, because you need results in a timely fashion.

As a guide, the minimum is 16 CPU / 32GB RAM for indexers and the SH. As you have 32 CPU / 32GB RAM, you should be OK as a starting point, but that does depend on the workload. You also need to check that the disk is SSD, that IOPS is over 800, and that you are not sending such large volumes of data per day that your AIO can't handle all the functions, so keep a check on ingest per day.

How to check Correlation Search resource consumption? I would start with the Monitoring Console (MC) for the usage stats. It's very comprehensive, it will show the load etc., and you can see which searches are consuming memory, which will help you with some aspects of resources. The MC comes with Splunk, so it should be on your AIO; see my links below for reference.

Some tips:
- Ensure only important data sources are onboarded, and that they are CIM compliant via the TAs.
- Enable a few data models at a time, based on your use cases (the correlation rules you want to use), and keep monitoring the load over time via the MC; this will help you keep on top of the resources.

Here are some further links on the topics I have mentioned that you should read:
ES Performance Reference https://docs.splunk.com/Documentation/ES/7.3.1/Install/DeploymentPlanning
MC Reference https://docs.splunk.com/Documentation/Splunk/9.2.1/DMC/DMCoverview
Hardware Reference https://docs.splunk.com/Documentation/Splunk/latest/Capacity/Referencehardware
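If you want a quick look at which searches are the heaviest before digging into the MC, a sketch along these lines (assuming the _audit index is available on your all-in-one instance) shows run times per scheduled search, correlation searches included:

    index=_audit action=search info=completed savedsearch_name=*
    | stats count avg(total_run_time) AS avg_runtime_sec max(total_run_time) AS max_runtime_sec BY savedsearch_name
    | sort - avg_runtime_sec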
I'm running the universal forwarder as a service in Docker; here is my docker-compose config:

services:
  splunkuniversalforwarder:
    platform: "linux/amd64"
    hostname: splunkuniversalforwarder
    image: splunk/universalforwarder:latest
    volumes:
      - opt-splunk-etc:/opt/splunk/etc
      - opt-splunk-var:/opt/splunk/var
      - ./splunk/splunkclouduf.spl:/tmp/splunkclouduf.spl
    ports:
      - "8000:8000"
      - "9997:9997"
      - "8088:8088"
      - "1514:1514"
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_USER=root
      - SPLUNK_ENABLE_LISTEN=9997
      - SPLUNK_CMD="/opt/splunkforwarder/bin/splunk install app /tmp/splunkclouduf.spl"
      - DEBUG=true
      - SPLUNK_PASSWORD=<root password>
      - SPLUNK_HEC_TOKEN=<HEC token>
      - SPLUNK_HEC_SSL=false

I have an HTTP Event Collector configured in my Splunk Free Trial account. When running docker-compose a lot of things seem to be going well, and then I hit this:

TASK [splunk_universal_forwarder : Setup global HEC] ***************************
fatal: [localhost]: FAILED! => {
    "changed": false
}
MSG:
POST/services/data/inputs/http/httpadmin********8089{'disabled': '0', 'enableSSL': '0', 'port': '8088', 'serverCert': '', 'sslPassword': ''}NoneNoneNone;;; AND excep_str: No Exception, failed with status code 404: {"text":"The requested URL was not found on this server.","code":404}

I can see no reference to POST/services/data/inputs/http/httpadmin in any Splunk docs. Can anyone shed any light on this, please?
I don't know if it makes a difference, but your fieldset is not terminated and your earliest and latest aren't referencing the timepicker token correctly.
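For illustration, this is roughly what the terminated fieldset and the token references would look like in Simple XML, using the timepicker token from the dashboard in question (only the relevant fragments are shown):

    <fieldset submitButton="false">
      <input type="time" token="timepicker">
        <label>TimeRange</label>
        <default>
          <earliest>-15m@m</earliest>
          <latest>now</latest>
        </default>
      </input>
    </fieldset>

and inside the panel's search element:

    <earliest>$timepicker.earliest$</earliest>
    <latest>$timepicker.latest$</latest>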
Hi @tv00638481, the following post helps to understand the steps that need to be followed before the upgrade: https://community.splunk.com/t5/Installation/What-s-the-order-of-operations-for-upgrading-Splunk-Enterprise/td-p/408003 In this case you need to upgrade the deployment server first, then the HF and UF. Compatibility between Splunk Cloud and forwarders: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Service/SplunkCloudservice#Supported_forwarder_versions ---- Regards, Sanjay Reddy ---- If this reply helps you, Karma would be appreciated
Hello Splunk community! I started my journey with Splunk one month ago and I am currently learning Splunk Enterprise Security. I have a very specific question: I am planning to use about 10-15 correlation searches in my ES, and I would like to know if I need to scale up the resources of my Splunk machine, which is an Ubuntu Server 20.04 with 32 GB RAM, 32 vCPU, and 200 GB hard disk. I have an all-in-one installation scenario because I am just learning the basics of Splunk at the moment, but I would like to know: How many resources do correlation searches in Splunk consume? How much RAM and CPU does one average correlation search in Splunk Enterprise Security consume?
Hi @Ismail_BSA  you can use following restcall to find caluclated fields created by you  | rest splunk_server=local services/data/props/calcfields/ | search author = <yourid> | table attr... See more...
Hi @Ismail_BSA  you can use following restcall to find caluclated fields created by you  | rest splunk_server=local services/data/props/calcfields/ | search author = <yourid> | table attribute field.name eai:acl.app author eai:acl.sharing   ---- Regards, Sanjay Reddy ---- If this reply helps you, Karma would be appreciated
Hi @NathanAsh, did you try the OVER clause in the chart command?

index=*.log source=*Report*
| eval latestDeployed_version=Deployed_Data_time."|".version
| eval latestVersion=Deployed_Data_time."|".version
| stats latest(Deployed_Data_time) AS Deployed_Data_time values(env) AS env max(latestVersion) AS latestVersion BY app
| rex field=latestVersion "[\|]+(?<version>.*)"
| table app version env
| chart values(version) OVER app BY env limit=0
| fillnull value="Not Deployed"

For more info see https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Chart Ciao. Giuseppe
This won't work as you want:

| stats latest(Deployed_Data_time) AS Deployed_Data_time values(env) AS env max(latestVersion) AS latestVersion BY app

The latest() function is based on the _time field, so if you want Deployed_Data_time to act as _time then you need to evaluate it:

| eval _time=strptime(Deployed_Data_time,"%m/%d/%Y %H:%M")

But you also cannot do max(latestVersion), as that is simply doing a string comparison on the date, so 4/16/2024 is LESS than 9/15/2023 - 4 is less than 9. If you ever want to do string-based date comparisons, you need the dates in ISO 8601 form, i.e. YYYY-MM-DD HH:MM:SS. So, using your example data, is this what you want?

| makeresults format=csv data="Deployed_Data_time,env,app,version
4/16/2024 15:29,axe1,app1,v-228
4/16/2024 15:29,axe1,app1,v-228
9/15/2023 8:12,axe1,app1,v-131
9/15/2023 8:05,axe2,app1,v-120
9/12/2023 1:19,axe2,app1, v-128
4/16/2024 15:29,axe2,app2,v-628
4/16/2024 15:26,axe2,app2,v-626
9/15/2023 8:12,axe2,app2,v-531
9/15/2023 8:05,axe1,app2,v-530
9/12/2023 1:19,axe1,app2, v-528"
| rex field=version "v-(?<v>\d+)"
| stats max(v) AS version BY app env
| table app,version,env
| chart values(version) by app, env limit=0
| fillnull value="Not Deployed"
Hello Team, we have a requirement to support Protobuf data ingestion for a Splunk endpoint. Many customers have expressed interest in sending data to Splunk as Protobuf messages and making it available for search.

What's the input?
https://github.com/open-telemetry/opentelemetry-proto/blob/v1.0.0/opentelemetry/proto/collector/logs/v1/logs_service.proto

The input would be the Protobuf message ExportLogsServiceRequest; the unmarshalled proto looks like this:

[ resource:{attributes:{key:"cloud.provider" value:{string_value:"data"}} attributes:{key:"ew_id" value:{string_value:"3421"}} attributes:{key:"ip" value:{string_value:"0.1.0.1"}}} scope_logs:{log_records:{time_unix_nano:1714188733 observed_time_unix_nano:1714188733 severity_text:"FATAL" body:{string_value:"onOriginRequest%20error%20level%2065553GXK3l7A1TG7QNiNsif0M4eZ7RmimyGeSu8GfyjGQTmbxjOEpDktybtjuWpb"} attributes:{key:"requestId" value:{string_value:"123456 Fp5zWvbr2cdYaOgC2LmC7hEs2"}} attributes:{key:"custom" value:{string_value:"3421 LUl8ovNHb6jO9Ak"}} attributes:{key:"queueit" value:{string_value:"1.2.3 sWcAL"}} attributes:{key:"ds2custom_message" value:{string_value:"Splunk POC Request 3qE2lAUxf0iDyCcxeNZkra3gK"}} trace_id:"\xd3\xcd8\xd3m5\xd3M4\xd3M4\xd3M4\xd3M4\xd3M4\xd3M4" span_id:"ӽ7\xd3m5\xd3M4\xd3M4\xd3M4\xd3M4\xd3M4\xd7]u"}} ]

curl -k -vvv -H "Authorization: Splunk XXXXX" -H 'Content-Type: application/x-protobuf' 'https://prd-p-pwf16.splunkcloud.com:8088/services/collector' --data-binary @data

How do we ingest the protobuf message?
Hi, I have a vast data set with a sample as below. I need to group the data by three columns, find the latest timestamp for each group, and get the fourth column value (version) for that latest timestamp.

Deployed_Data_time   env    app    version
4/16/2024 15:29      axe1   app1   v-228
4/16/2024 15:29      axe1   app1   v-228
9/15/2023 8:12       axe1   app1   v-131
9/15/2023 8:05       axe2   app1   v-120
9/12/2023 1:19       axe2   app1   v-128
4/16/2024 15:29      axe2   app2   v-628
4/16/2024 15:26      axe2   app2   v-626
9/15/2023 8:12       axe2   app2   v-531
9/15/2023 8:05       axe1   app2   v-530
9/12/2023 1:19       axe1   app2   v-528

and I need the output as:

app    axe1    axe2
app1   v-228   v-120
app2   v-530   v-628

And I tried something as below, but the output is not as expected.

index=*.log source=*Report*
| eval latestDeployed_version=Deployed_Data_time."|".version
| eval latestVersion=Deployed_Data_time."|".version
| stats latest(Deployed_Data_time) AS Deployed_Data_time values(env) AS env max(latestVersion) AS latestVersion BY app
| rex field=latestVersion "[\|]+(?<version>.*)"
| table app,version,env
| chart values(version) by app, env limit=0
| fillnull value="Not Deployed"

Please help me achieve this. Thanks
In a dashboard, a panel is showing different data, but when we open the panel query using "open in search" it shows the correct data.

<form version="1.1" theme="dark">
  <label>DMT Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  <row>
    <panel>
      <table>
        <search>
          <query>
            index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) repoter.dataloadingintiated
            |stats count by local
            |append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) task.dataloadedfromfiles NOT "error" NOT "end_point" NOT "failed_data" |stats count as FilesofDMA]
            |append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) "app.mefwebdata - jobintiated"
              |eval host = case(match(host_ip, "12.234"), "HOP"+substr(host, 120,24), match(host_ip, "10.123"), "HOM"+substr(host, 120,24))
              |eval host = host + " - " + host_ip
              |stats count by host
              |fields - count
              |appendpipe [stats count |eval Error="Job didn't run today" |where count==0 |table Error]]
            |stats values(host) as "Host Data Details", values(Error) as Error, values(local) as "Files created localley on AMP", values(FilesofDMA) as "File sent to DMA"
          <query>
          <earliest>timepicker.earliest</earliest>
          <latest>timepicker.latest</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentageRow">false</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="host_ip>
          <colorPalette type="map">{"12.234.201.22":#53A051, "10.457.891.34":#53A051,"10.234.34.18":#53A051,"10.123.363.23":#53A051}</colorPalette>
        </format>
        <format type="color" field="local">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="FilesofDMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Files created localley on AMP">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="File sent to DMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Error">
          <colorPalette type="map">{"Job didn't run today":#DC4E41}</colorPalette>
        </format>
        <format type="color" field="Host Data Details">
          <colorPalette type="map">{"HOM-jjderf - 10.123.34.18":#53A051"HOM-iytgh - 10.123.363.23":#53A051, HOP-wghjy - 12.234.201.22":#53A051, "HOP-tyhgt - 12.234.891.34":#53A051}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</form>

Panel displaying in dashboard:

When we open the panel in search it shows as below (this is the correct data):

Host Data Details: HOM-jjderf - 10.123.34.18, HOM-iytgh - 10.123.363.23, HOP-wghjy - 12.234.201.22, HOP-tyhgt - 12.234.891.34
Error:
Files created localley on AMP: 221
File sent to DMA: 86
Hello, I recently encountered an issue with Splunk Cloud. After creating a new eval in the "Fields" menu under "calculated fields," named 'src' for the source type "my_source_type," I adjusted the permissions to make it readable and writable for my role, with app permissions set to all apps. However, upon saving these permissions, the eval disappeared, and I couldn't locate it anywhere. Thinking it might not have saved properly, I attempted to recreate it with the same name and source type. However, when I tried to adjust the permissions, I received a red error banner stating: "Splunk could not update permissions for resource data/props/calcfields [HTTP 409] [{'type': 'ERROR', 'code': None, 'text': 'Cannot overwrite existing app object'}]" Any recommendations on where I should search to locate the initially created eval that seems to have gone missing? Thank you.
1. You can look for the source using the metadata command:

| metadata type=sources

or even

| metadata type=sources index=your_index

Alternatively you can use tstats:

| tstats count where index IN (some, subset, of, your, indexes) source="your_source" by index

2. The data may not be findable due to a host of possible issues:
a) The data is indexed outside of your search time range, due to either the data itself or wrong timestamp recognition
b) The configuration can be filtering/redirecting events to another index
c) The data may be being sent to a non-existent index and you don't have a last-resort index defined
d) The source might be overwritten on ingestion.
Hi @Naa_Win, you have to define the frequency of your alert and run a simple search scheduled at that frequency. If, for example, you want to run your alert every 5 minutes, you should run a search like the following:

index=error_idx sourcetype=error_srctyp earliest=-5m@m latest=@m

If there are events, the alert triggers. By choosing a defined period you are sure that the alert triggers only once for a given set of events. Ciao. Giuseppe
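Purely as an illustration of the same idea in configuration form (the stanza name and thresholds are placeholders, not taken from this thread), the scheduled alert could look roughly like this in savedsearches.conf:

    [New error events alert]
    search = index=error_idx sourcetype=error_srctyp earliest=-5m@m latest=@m
    cron_schedule = */5 * * * *
    enableSched = 1
    counttype = number of events
    relation = greater than
    quantity = 0
    alert.track = 1

With counttype, relation, and quantity set this way, the alert fires only when the 5-minute window actually contains events.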
It seems a bit like overkill to use Splunk for this if all you send are errors. But anyway, you should just search for the events with continuous scheduling and you're set (just take into account possible delay in indexing).
Hello Team, I have error data coming into an index (we filtered to send only error logs to this index). I want to create an alert whenever new events come into that index, and I don't want to send duplicate alerts.

index=error_idx sourcetype=error_srctyp
Hi @isoutamo and @bowesmana, I have tried the ways you shared, but it still doesn't work; it's as if Splunk doesn't read the transforms.conf. I checked the logs in index=_internal but I don't see any errors related to it.
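A generic way to check whether Splunk is actually picking up your transforms.conf (not something suggested earlier in the thread; the stanza name below is a placeholder) is btool on the instance that parses the data:

    # list the merged transforms.conf and show which file each line comes from
    $SPLUNK_HOME/bin/splunk btool transforms list --debug

    # or limit the output to a single stanza (hypothetical name)
    $SPLUNK_HOME/bin/splunk btool transforms list my_transform --debug

Also remember that props/transforms changes generally require a restart of the instance applying them (indexer or heavy forwarder) before they take effect.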