All Posts



@splunklearner  Proxy Server: A proxy server acts as an intermediary between your Splunk instance and an external service (like Akamai’s log delivery endpoints). It forwards requests from your internal network to the internet and relays responses back. This is critical in your case since your Splunk instances lack direct internet access. Proxy Host: This is the specific hostname or IP address of the proxy server that Splunk will use to route its outbound traffic. Since your Splunk instances are internal and not internet-facing, you’ll need a proxy to enable communication with Akamai’s services (e.g., to pull logs via API or receive them via HTTP Event Collector if configured that way). Additionally, with instances refreshing every 45 days, you’ll need a solution that’s consistent across refreshes.  
| streamstats dc(Item) as ID | eventstats min(ID) as ID by Item
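A hedged sketch of how this snippet could be combined with the base search from the thread (index, sourcetype, and field names are taken from the question and may differ in your environment):

```
index=transactionlog sourcetype=transaction earliest=-1h@h latest=@h
| streamstats dc(Item) as ID
| eventstats min(ID) as ID by Item
| table _time Item Count ID
```

streamstats dc(Item) increments ID each time a new Item value is first seen; eventstats min(ID) by Item then copies that first-seen number back onto every row with the same Item, so each unique Item keeps a single stable ID instead of a row counter.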
@gcusello Thanks, sir. I will do the same.
We are trying to onboard Akamai logs to Splunk. We installed the add-on, and it is asking for a proxy server and proxy host. I am not sure what these mean. Our Splunk instances are hosted on AWS, are refreshed every 45 days due to compliance, and are not exposed to the internet (internal only). How do I create and configure a proxy server here? Please guide me. This is the app installed - https://splunkbase.splunk.com/app/4310
Hi @avi123, could you share the search and the field names you're using? Ciao. Giuseppe
Hi @Poojitha , there is no reason for this behavior. If you can, open a ticket to Splunk Support. Ciao. Giuseppe
Hi All, I have a Splunk query giving results in this format:

Time: 3/10/25 10:52:15.000 AM
Event:
{
  BCDA_AB_CD_01: 1
  BCAD_AB__02: 0
  BCDA_AB_DC: 1
  BCAD_CD_02: 0
}

However, I want to remove BCAD_AB__02 and BCAD_CD_02 from the output. Please help me write a Splunk query to exclude these two fields from the output. I tried | fields - BCAD_AB__02 BCAD_CD_02 but this didn't work.
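One possible reason `fields -` appears not to work here is that the event viewer still renders the raw JSON (_raw) rather than the extracted fields, so the removed fields remain visible inside the event text. A minimal sketch, assuming the fields are extracted with spath and the remaining field names are as shown above (adjust the base search and field names to your data):

```
<your base search>
| spath
| fields - BCAD_AB__02 BCAD_CD_02
| table _time BCDA_AB_CD_01 BCDA_AB_DC
```

Using table (or adding `| fields - _raw`) forces the results to display only the selected extracted fields instead of the raw event.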
Hi @dataisbeautiful , ok, please try this: index=transactionlog sourcetype=transaction earliest=-1h@h latest=@h | join type=left Item [ search index=transactionlog sourcetype=transaction earliest=-1h@h latest=@h | stats count BY Item | eval counter=1 | accum counter AS ID | table Item ID ] | table _time Item Count ID Ciao. Giuseppe
@gcusello Yes sir, I tried. I clicked on the lvl field --> the Info value. It gets filtered as lvl=Info, but now there are no results, though there are results for lvl="Info".
Hi @gcusello Thanks for your suggestion, I've run this:

index=transactionlog sourcetype=transaction earliest=-1h@h latest=@h | eval counter=1 | accum counter AS ID | fields - counter | table _time Item Count ID

It gives the output:

_time Item Count ID
... Apple 8 1
... Banana 2 2
... Apple 5 3
... Coconut 1 4
... Banana 2 5

This isn't what I'm after, as Apple doesn't have a single ID now. I'd like a single ID per unique value of Item, not a row-counter ID. Hope that makes sense.
@charlottelimcl  Check this :- I have used makeresults command for dummy.  | makeresults | eval _raw=" _time,ComputerName,Account_Name,EventCode,Object_Name,Process_Name 2023-10-27 10:00:00,PC1... See more...
@charlottelimcl Check this. I have used the makeresults command for dummy data:

| makeresults
| eval _raw="
_time,ComputerName,Account_Name,EventCode,Object_Name,Process_Name
2023-10-27 10:00:00,PC1,user1,4688,,/path/to/parent.exe
2023-10-27 10:00:01,PC1,user1,4663,/path/to/hello.exe,/path/to/welcome.exe
2023-10-27 10:01:00,PC2,user2,4688,,/path/to/another.exe
2023-10-27 10:01:02,PC2,user2,4663,/path/to/goodbye.exe,/path/to/start.exe
2023-10-27 10:02:00,PC3,user3,4688,,/path/to/third.exe
2023-10-27 10:02:03,PC3,user3,4663,/path/to/final.exe,/path/to/launch.exe
"
| multikv forceheader=1
| eval _time=strptime(_time,"%Y-%m-%d %H:%M:%S")
| stats earliest(_time) AS _time values(ComputerName) AS ComputerName values(eval(if(EventCode=4663, Process_Name, null()))) AS New_Process_Name values(eval(if(EventCode=4688, Process_Name, null()))) AS Initiating_Process_Name values(eval(if(EventCode=4663, Object_Name, null()))) AS Object_Name BY Account_Name
| table _time ComputerName Account_Name New_Process_Name Initiating_Process_Name Object_Name

In this example: makeresults generates a dummy event, eval creates the raw data with the necessary fields, multikv parses the raw data into individual fields, and stats aggregates the data per your requirements (null() is used instead of "" so that values() skips the non-matching rows).
Hi @dataisbeautiful , in other words, you need to add a progressive number to your results, is it correct? if this is your requirement, please try this: index=transactionlog sourcetype=transaction earliest=-1h@h latest=@h | eval counter=1 | accum counter AS ID | fields - counter | table _time Item Count ID Ciao, Giuseppe
@gcusello I've updated my post with the base search and raw data. @meetmshah the ID is not in the raw data; it is something I am adding only at search time. I'm not interested in maintaining an ID between times, it's going to be used in a visualisation on a dashboard.
@NoSpaces Ensure that both searches (dashboard and manual) are using the same time range. Check the time picker settings in the dashboard; the default time range in a dashboard might be different from the one you used in the search bar. If you have multiple panels, ensure that they are all using the same base search. Sometimes panels reference different searches, leading to inconsistencies.
Hi @dataisbeautiful , could you share your search and a sample of your data (both using "Add/Edit Code Sample" button not a screenshot)? Ciao. Giuseppe
Hi @Poojitha , try to click on the value you want for lv1 using the interesting fields panel and see how it displays this filter. Ciao. Giuseppe
@thanh_on First, confirm the actual memory available on the server hosting the Search Head. If this is a virtual machine (VM), check the hypervisor (e.g., VMware, Hyper-V) to ensure it's allocated 16GB. For a physical server or VM, log into the operating system and run a command like free -h (Linux) or systeminfo | find "Total Physical Memory" (Windows) to verify the OS sees 16GB. If the OS only reports 4GB, the issue is likely with the server configuration or VM settings, not Splunk; fix that first by adjusting the VM allocation or checking the hardware.
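You can also check how much memory Splunk itself detects, straight from the search bar. A sketch using Splunk's REST server/info endpoint (requires permission to run the rest command; physicalMemoryMB and numberOfCores are fields that endpoint returns):

```
| rest /services/server/info splunk_server=local
| table serverName physicalMemoryMB numberOfCores
```

If physicalMemoryMB reports roughly 4096 rather than 16384, Splunk is seeing the same 4GB the OS reports, which points back to the VM or hardware allocation rather than any Splunk setting.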
Hi @charlottelimcl , about Object_Name, please use this: index=wineventlog source=wineventlog:security (EventCode=4688 OR (EventCode=4663 Object_Name="*hello.exe" Process_Name="*welcome.exe")) | st... See more...
Hi @charlottelimcl, about Object_Name, please use this:

index=wineventlog source=wineventlog:security (EventCode=4688 OR (EventCode=4663 Object_Name="*hello.exe" Process_Name="*welcome.exe")) | stats earliest(_time) AS _time values(ComputerName) AS ComputerName values(Object_Name) AS Object_Name values(eval(if(EventCode=4663,Process_Name,null()))) AS New_Process_Name values(eval(if(EventCode=4688,Process_Name,null()))) AS Initiating_Process_Name BY Account_Name

About the execution time, this is the more performant way to build the search; if you try with join, execution will take much longer. To optimize the search, you could try an acceleration method https://docs.splunk.com/Documentation/Splunk/9.4.1/Knowledge/Aboutsummaryindexing or https://docs.splunk.com/Documentation/Splunk/9.4.1/Knowledge/Acceleratetables or use a Data Model https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Aboutdatamodels Ciao. Giuseppe
Hello everyone! I came across a strange behavior. I was building a dashboard and noticed that some results look unexpected. The results are presented at the top of the screenshot. On the last row, you can see that ProvDuration is 0. Also, StartTime and EndTime are equal. Moreover, other fields are also equal, and it's illogical due to the search specifics. As you can see, StartTime and EndTime represent the min and max values of the _time field.   index="hrz" (sourcetype="hrz_file_log" AND "*is provisioning") OR (sourcetype="hrz_file_syslog" AND EventType="AGENT_STARTUP") | rex field=_raw "VM\s+(?<MachineName>.*)$" | table _time, PoolId, MachineName, _raw | transaction MachineName startswith="Pool" endswith="startup" maxevents=2 keeporphans=false | search (PoolId="*") (MachineName="*") | search duration<=700 | stats min(duration) AS DurationMin, avg(duration) AS DurationAvg, max(duration) AS DurationMax, min(_time) AS StartTime, max(_time) AS EndTime BY PoolId | eval DurationMin = round(DurationMin, 2) | eval DurationAvg = round(DurationAvg, 2) | eval DurationMax = round(DurationMax, 2) | eval ProvDuration = round((EndTime - StartTime), 2) | eval StartTime = strftime(StartTime, "%Y-%m-%d %H:%M:%S.%3Q") | eval EndTime = strftime(EndTime, "%Y-%m-%d %H:%M:%S.%3Q") | table PoolId, DurationMin, DurationAvg, DurationMax, ProvDuration, StartTime EndTime   I decided to dig deeper and try to analyze the search more carefully. After I moved to the search through the dashboard, I found that the search results look different. The last row looks as it should be. You can see these results at the bottom of the screenshot. What could be wrong with my search, and what am I missing?
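One way to isolate whether transaction is the culprit: the transaction command can close spans unexpectedly (maxevents=2, keeporphans=false), which may produce the zero-duration rows described above. A hedged stats-based alternative, using the same indexes and field names as the search in the question (it assumes each MachineName has exactly one provisioning/startup pair in the time range):

```
index="hrz" (sourcetype="hrz_file_log" AND "*is provisioning") OR (sourcetype="hrz_file_syslog" AND EventType="AGENT_STARTUP")
| rex field=_raw "VM\s+(?<MachineName>.*)$"
| stats min(_time) AS start max(_time) AS end values(PoolId) AS PoolId BY MachineName
| eval duration = end - start
| where duration <= 700
| stats min(duration) AS DurationMin avg(duration) AS DurationAvg max(duration) AS DurationMax min(start) AS StartTime max(end) AS EndTime BY PoolId
```

If this version produces the expected last row while the transaction version does not, the difference comes from how transaction pairs and discards events rather than from the dashboard itself.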
Hello @dataisbeautiful, You can just add the other field name after the by clause. Can you share the current search you are using, and confirm whether the ID field you want to add is in the events themselves?