All Posts



Hi @Ismail_BSA, you can use the following REST call to find calculated fields created by you:

| rest splunk_server=local services/data/props/calcfields/
| search author=<yourid>
| table attribute field.name eai:acl.app author eai:acl.sharing

---- Regards, Sanjay Reddy ----
If this reply helps you, Karma would be appreciated
Hi @NathanAsh, have you tried the OVER clause in the chart command?

index=*.log source=*Report*
| eval latestDeployed_version=Deployed_Data_time."|".version
| eval latestVersion=Deployed_Data_time."|".version
| stats latest(Deployed_Data_time) AS Deployed_Data_time values(env) AS env max(latestVersion) AS latestVersion BY app
| rex field=latestVersion "[\|]+(?<version>.*)"
| table app version env
| chart values(version) OVER app BY env limit=0
| fillnull value="Not Deployed"

For more info, see https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Chart
Ciao. Giuseppe
This won't work as you want:

| stats latest(Deployed_Data_time) AS Deployed_Data_time values(env) AS env max(latestVersion) AS latestVersion BY app

The latest() function is based on the _time field, so if you want Deployed_Data_time to be _time, you need to evaluate it:

| eval _time=strptime(Deployed_Data_time,"%m/%d/%Y %H:%M")

You also cannot do max(latestVersion), as that simply does a string comparison on the date, so 4/16/2024 is LESS than 9/15/2023 (4 is less than 9). If you ever want to do string-based date comparisons, you need the dates to be ISO 8601, i.e. YYYY-MM-DDTHH:MM:SS. So, using your example data, is this what you want?

| makeresults format=csv data="Deployed_Data_time,env,app,version
4/16/2024 15:29,axe1,app1,v-228
4/16/2024 15:29,axe1,app1,v-228
9/15/2023 8:12,axe1,app1,v-131
9/15/2023 8:05,axe2,app1,v-120
9/12/2023 1:19,axe2,app1, v-128
4/16/2024 15:29,axe2,app2,v-628
4/16/2024 15:26,axe2,app2,v-626
9/15/2023 8:12,axe2,app2,v-531
9/15/2023 8:05,axe1,app2,v-530
9/12/2023 1:19,axe1,app2, v-528"
| rex field=version "v-(?<v>\d+)"
| stats max(v) AS version BY app env
| table app,version,env
| chart values(version) by app, env limit=0
| fillnull value="Not Deployed"
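The same pitfall is easy to reproduce outside Splunk. Here is a minimal Python sketch (illustrative only, not part of the original answer) showing why a lexical max() on m/d/Y strings picks the wrong date, while parsing the strings (or reformatting them to ISO 8601) gives the correct ordering:

```python
from datetime import datetime

dates = ["4/16/2024 15:29", "9/15/2023 8:12"]

# Naive string comparison: "9/15/2023..." wins because "9" > "4",
# even though 4/16/2024 is the later date.
assert max(dates) == "9/15/2023 8:12"

# Parsing to real timestamps, then reformatting to ISO 8601,
# makes lexical order match chronological order.
iso = [datetime.strptime(d, "%m/%d/%Y %H:%M").strftime("%Y-%m-%dT%H:%M:%S")
       for d in dates]
assert max(iso) == "2024-04-16T15:29:00"
```

This is the same reason the answer above strips the numeric part out of the version with rex before calling max().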
Hello Team, we have a requirement to support Protobuf data ingestion for a Splunk endpoint. Many customers have expressed interest in sending data to Splunk as Protobuf messages and making it available for search.

What's the input? https://github.com/open-telemetry/opentelemetry-proto/blob/v1.0.0/opentelemetry/proto/collector/logs/v1/logs_service.proto

The input would be the Protobuf message ExportLogsServiceRequest. Unmarshalled proto:

[ resource:{attributes:{key:"cloud.provider" value:{string_value:"data"}} attributes:{key:"ew_id" value:{string_value:"3421"}} attributes:{key:"ip" value:{string_value:"0.1.0.1"}}} scope_logs:{log_records:{time_unix_nano:1714188733 observed_time_unix_nano:1714188733 severity_text:"FATAL" body:{string_value:"onOriginRequest%20error%20level%2065553GXK3l7A1TG7QNiNsif0M4eZ7RmimyGeSu8GfyjGQTmbxjOEpDktybtjuWpb"} attributes:{key:"requestId" value:{string_value:"123456 Fp5zWvbr2cdYaOgC2LmC7hEs2"}} attributes:{key:"custom" value:{string_value:"3421 LUl8ovNHb6jO9Ak"}} attributes:{key:"queueit" value:{string_value:"1.2.3 sWcAL"}} attributes:{key:"ds2custom_message" value:{string_value:"Splunk POC Request 3qE2lAUxf0iDyCcxeNZkra3gK"}} trace_id:"\xd3\xcd8\xd3m5\xd3M4\xd3M4\xd3M4\xd3M4\xd3M4\xd3M4" span_id:"ӽ7\xd3m5\xd3M4\xd3M4\xd3M4\xd3M4\xd3M4\xd7]u"}} ]

We tried:

curl -k -vvv -H "Authorization: Splunk XXXXX" -H 'Content-Type: application/x-protobuf' 'https://prd-p-pwf16.splunkcloud.com:8088/services/collector' --data-binary @data

How can we ingest the Protobuf message?
Hi, I have a vast data set with a sample below. I need to group the data by three columns, find the latest timestamp within each group, and get the fourth column's value for that latest timestamp.

Deployed_Data_time   env    app    version
4/16/2024 15:29      axe1   app1   v-228
4/16/2024 15:29      axe1   app1   v-228
9/15/2023 8:12       axe1   app1   v-131
9/15/2023 8:05       axe2   app1   v-120
9/12/2023 1:19       axe2   app1   v-128
4/16/2024 15:29      axe2   app2   v-628
4/16/2024 15:26      axe2   app2   v-626
9/15/2023 8:12       axe2   app2   v-531
9/15/2023 8:05       axe1   app2   v-530
9/12/2023 1:19       axe1   app2   v-528

and I need the output as:

app    axe1    axe2
app1   v-228   v-120
app2   v-530   v-628

I tried the following, but the output is not as expected:

index=*.log source=*Report*
| eval latestDeployed_version=Deployed_Data_time."|".version
| eval latestVersion=Deployed_Data_time."|".version
| stats latest(Deployed_Data_time) AS Deployed_Data_time values(env) AS env max(latestVersion) AS latestVersion BY app
| rex field=latestVersion "[\|]+(?<version>.*)"
| table app,version,env
| chart values(version) by app, env limit=0
| fillnull value="Not Deployed"

Please help me achieve this. Thanks!
A panel in my dashboard shows different data, but when we open the panel query using "Open in Search" it shows correctly.

<form version="1.1" theme="dark">
  <label>DMT Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) repoter.dataloadingintiated
| stats count by local
| append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) task.dataloadedfromfiles NOT "error" NOT "end_point" NOT "failed_data" | stats count as FilesofDMA]
| append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) "app.mefwebdata - jobintiated"
  | eval host = case(match(host_ip, "12.234"), "HOP"+substr(host, 120,24), match(host_ip, "10.123"), "HOM"+substr(host, 120,24))
  | eval host = host + " - " + host_ip
  | stats count by host
  | fields - count
  | appendpipe [stats count | eval Error="Job didn't run today" | where count==0 | table Error]]
| stats values(host) as "Host Data Details", values(Error) as Error, values(local) as "Files created locally on AMP", values(FilesofDMA) as "File sent to DMA"</query>
          <earliest>$timepicker.earliest$</earliest>
          <latest>$timepicker.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentageRow">false</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="host_ip">
          <colorPalette type="map">{"12.234.201.22":#53A051,"10.457.891.34":#53A051,"10.234.34.18":#53A051,"10.123.363.23":#53A051}</colorPalette>
        </format>
        <format type="color" field="local">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="FilesofDMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Files created locally on AMP">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="File sent to DMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Error">
          <colorPalette type="map">{"Job didn't run today":#DC4E41}</colorPalette>
        </format>
        <format type="color" field="Host Data Details">
          <colorPalette type="map">{"HOM-jjderf - 10.123.34.18":#53A051,"HOM-iytgh - 10.123.363.23":#53A051,"HOP-wghjy - 12.234.201.22":#53A051,"HOP-tyhgt - 12.234.891.34":#53A051}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</form>

The panel in the dashboard shows different numbers. When we open the panel in search, it shows the following (this is the correct data):

Host Data Details: HOM-jjderf - 10.123.34.18, HOM-iytgh - 10.123.363.23, HOP-wghjy - 12.234.201.22, HOP-tyhgt - 12.234.891.34
Error: (empty)
Files created locally on AMP: 221
File sent to DMA: 86
Hello, I recently encountered an issue with Splunk Cloud. After creating a new eval in the "Fields" menu under "calculated fields," named 'src' for the source type "my_source_type," I adjusted the permissions to make it readable and writable for my role, with app permissions set to all apps. However, upon saving these permissions, the eval disappeared, and I couldn't locate it anywhere. Thinking it might not have saved properly, I attempted to recreate it with the same name and source type. However, when I tried to adjust the permissions, I received a red error banner stating: "Splunk could not update permissions for resource data/props/calcfields [HTTP 409] [{'type': 'ERROR', 'code': None, 'text': 'Cannot overwrite existing app object'}]" Any recommendations on where I should search to locate the initially created eval that seems to have gone missing? Thank you.
1. You can look for the source using the metadata command:

| metadata type=sources

or even:

| metadata type=sources index=your_index

Alternatively, you can use tstats:

| tstats count where index IN (some, subset, of, your, indexes) source="your_source" by index

2. The data may not be findable due to a host of possible issues:
a) The data is indexed outside of your search time range, due to either the data itself or wrong timestamp recognition
b) The configuration may be filtering/redirecting events to another index
c) The data may be being sent to a non-existent index and you don't have a last-resort index defined
d) The source might be overwritten on ingestion.
Hi @Naa_Win, you have to define the frequency of your alert and schedule a simple search at that frequency. If, e.g., you want to run your alert every 5 minutes, you should run a search like the following:

index=error_idx sourcetype=error_srctyp earliest=-5m@m latest=@m

If there are events, the alert triggers. By choosing a defined period, you can be sure the alert triggers only once for a given set of events.
Ciao. Giuseppe
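The snapped, non-overlapping window logic behind earliest=-5m@m latest=@m can be sketched in Python (purely illustrative, not Splunk code):

```python
from datetime import datetime, timedelta

def alert_window(now: datetime) -> tuple:
    # latest=@m : snap "now" down to the whole minute
    latest = now.replace(second=0, microsecond=0)
    # earliest=-5m@m : five minutes before that snapped edge
    return latest - timedelta(minutes=5), latest

# Consecutive 5-minute runs tile the timeline with no overlap,
# so each event is evaluated by exactly one run of the alert.
e, l = alert_window(datetime(2024, 4, 16, 15, 29, 42))
assert (e, l) == (datetime(2024, 4, 16, 15, 24), datetime(2024, 4, 16, 15, 29))
```

This is why scheduling the search every 5 minutes over that window avoids duplicate alerts for the same events.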
It seems a bit like overkill to use Splunk for this if all you send are errors. But anyway, just search for events with continuous scheduling and you're set (taking into account possible delay in indexing).
Hello Team, I have error data coming into an index (we filtered to send only error logs to this index). I want to create an alert whenever new events arrive in that index, and I don't want to send duplicate alerts.

index=error_idx sourcetype=error_srctyp
Hi @isoutamo and @bowesmana, I have tried the approaches you shared but it still doesn't work; it's as if Splunk doesn't read the transforms.conf. I checked the logs in index=_internal but I don't see any errors related to it.
@richgalloway well, it goes to a specific index, but I have also tried the searches below and I don't see the source or the events:

index=* host=abc | stats values(source)

index=* source=log_source_path
Hi @Phillip.Montgomery, thank you for coming back and confirming! I'm glad to hear that Mario's suggestion worked. 
As @deepakc already pointed out, you can't find something that isn't there, so unless some external source reports those events to Splunk, Splunk doesn't know about them. While you might try to set up some form of auditing in Windows alone, you'll typically end up with either too little information or too much (you can of course even set up procmon to run all the time and try to ingest its output, but that's... not very convenient). And that's why you end up paying big bucks for DLP systems (which can have the nice feature of enforcing policy, not just detecting when someone violates it).
Yes, you are. whitelist/blacklist has two options.

1. You explicitly list (dis)allowed event codes:

blacklist1 = 17,234,4762-4767

2. You specify key=regex to match (caveat: this doesn't work with XML-rendered events; in that case you need another setting):

blacklist1 = EventCode=%47..%

You tried to use the second option to do the first one.
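To contrast the two syntaxes side by side, here is a hypothetical inputs.conf fragment (the event codes are placeholders, not a recommendation):

```ini
[WinEventLog://Security]
# Style 1: explicit list of event codes and ranges
blacklist1 = 17,234,4762-4767
# Style 2: key = %regex% match against a rendered field
# (does not apply when renderXml = true)
blacklist2 = EventCode=%47..%
```

Note that each numbered blacklist setting (blacklist1, blacklist2, ...) must use one style or the other, not a mix.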
Try setting it like this: [WinEventLog://Security] index = wineventlog sourcetype = WinEventLog:Security disabled = 0 whitelist = 0-6000 blacklist = 1,2,3,4
The /export endpoint will dispatch a search and then retrieve the results when the search is completed. If the search takes a lot of time, then likely the request will time out. You can either make y... See more...
The /export endpoint will dispatch a search and then retrieve the results when the search is completed. If the search takes a lot of time, the request will likely time out. You can either make your search faster, or use two endpoints: one to dispatch the search and another to retrieve the results later.

To dispatch the search:

curl -k -H 'Authorization: Splunk <your_token_here>' https://your_searchhead_here:8089/services/search/jobs -d search="search index=* | head 10 | table host"

The above call returns a search id (sid), which you'll need in the following call to retrieve the results:

curl -k -H 'Authorization: Splunk <your_token_here>' https://your_searchhead_here:8089/services/search/jobs/<yoursidhere>/results

Ref: https://docs.splunk.com/Documentation/Splunk/latest/RESTTUT/RESTsearches
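The two-step flow can be sketched in Python. This is only a sketch of the URL construction and sid handling (the hostname is the same placeholder as above, and the parsing assumes you dispatch with output_mode=json, in which case the jobs endpoint answers with a JSON body containing "sid"):

```python
import json

BASE = "https://your_searchhead_here:8089"

def dispatch_url() -> str:
    # POST target for creating a search job
    return f"{BASE}/services/search/jobs"

def results_url(sid: str) -> str:
    # GET target for fetching the finished results;
    # note the "jobs/" path segment before the sid
    return f"{BASE}/services/search/jobs/{sid}/results"

def parse_sid(response_body: str) -> str:
    # With output_mode=json, the dispatch response looks like {"sid": "..."}
    return json.loads(response_body)["sid"]

sid = parse_sid('{"sid": "1714188733.123"}')
print(results_url(sid))
# → https://your_searchhead_here:8089/services/search/jobs/1714188733.123/results
```

In practice you would also poll the job's status endpoint until dispatchState is DONE before fetching results.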
I just now accepted the solution; I didn't see a notification that it was answered, sorry.
That works, I did a search for log4j.xml and found the file.  Thank you.