All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi @Silah, I saw your second message only after my answer, please try this. Let me understand: what's the value of status in the Begin and End events? You have to check these conditions in the evals:

index=your_index status IN ("Begin", "End")
| stats earliest(eval(if(status="Begin",_time,null()))) AS Begin_time latest(eval(if(status="End",_time,null()))) AS End_time BY UUID
| eval diff=End_time-Begin_time
| table UUID diff

Ciao. Giuseppe
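
For quick verification, here is a self-contained sketch of the same pattern using makeresults with dummy events (the UUIDs and epoch times are made up):

| makeresults format=csv data="UUID,status,time
u1,Begin,1718280000
u1,End,1718280042
u2,Begin,1718280010
u2,End,1718280095"
| eval _time=tonumber(time)
| stats earliest(eval(if(status="Begin",_time,null()))) AS Begin_time latest(eval(if(status="End",_time,null()))) AS End_time BY UUID
| eval diff=End_time-Begin_time
| table UUID diff

This should return diff=42 for u1 and diff=85 for u2, confirming that the if(...)/null() pattern yields numeric timestamps rather than booleans.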
Hi @Silah, yes, it was a mistake!

index=your_index status IN ("Begin", "End")
| stats earliest(eval(status="Begin")) AS Begin_time latest(eval(status="End")) AS End_time BY UUID
| eval diff=End_time-Begin_time
| table UUID diff

Anyway, you have to separately check the two conditions (status="Begin" and status="End") to verify that those events contain the status and UUID fields. You can also add the Begin_time and End_time fields to the final table command to see whether they are present or not. Remember to always use quotes in the eval commands. Ciao. Giuseppe
Hello! I have a dashboard with several visualization panels. One of them is linked to a search that pulls the Top 10 Source IPs by Log Activity.

index="index_name" $token.source.address$
| fields source_address
| stats count by source_address
| table source_address, count
| rename source_address as "Source IP", count as "Count"
| sort -Count
| head 10

The token, $token.source.address$, is set by a text box on the dashboard for the bar visualization below. However, in addition to the correct value being shown, there are often other, incorrect values shown as well. There doesn't seem to be a pattern to when this happens. Does anyone know why this may happen and how to correct it? Thanks!
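
One plausible cause, offered as a guess from the search as written: the bare token is applied as a free-text term, so it matches events where the value appears anywhere in _raw (for example in a destination or URL field), not only in source_address. A minimal sketch of constraining it to the field (assuming source_address is extracted at search time):

index="index_name" source_address="$token.source.address$"
| stats count by source_address
| rename source_address as "Source IP", count as "Count"
| sort -Count
| head 10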
Sorry, I should have added that I tried listing Begin_time and End_time in the table as well, and both values are simply "True" rather than a timestamp.
Hi @gcusello, thank you, this gets me started. I assume that

| eval diff=End_time-Start_time

should actually be

| eval diff=End_time-Begin_time

as it is called Begin_time in the earliest eval of the Begin event in the stats part. It does sort of work: my search query is identifying 4000 events and the table lists out 2000 by their UUID, so it has accurately identified that there is a Begin and End pair for each UUID. However, the "diff" field of the table is blank for all of them. When I check the field, the value of diff is "null".
Hi everyone, I was wondering if anyone had any suggestions on effective ways of pulling application data from Splunk Cloud into PowerBI Platform without using the Splunk ODBC driver? Our Business Intelligence team is keen on enriching their data by integrating Splunk with PowerBI. We're aiming to ensure that this integration follows best practices and is both efficient and reliable.   Has anyone here successfully implemented this kind of integration? If so, could you share the approach you took, the tools or connectors you used, and any tips or challenges you encountered? Thanks in advance for your help! Patrick #powerbi #odbc #splunk #businessintelligence
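
Not an authoritative recommendation, but one approach that avoids ODBC is Splunk's REST search API, which Power BI can consume via a Web data source or a scheduled export script. A minimal sketch (the hostname, credentials, index, and search are placeholders; Splunk Cloud requires the management port 8089 to be reachable/allowlisted for your stack):

curl -k -u svc_account:password \
  "https://your-stack.splunkcloud.com:8089/services/search/jobs/export" \
  -d search="search index=app_data earliest=-24h | stats count by status" \
  -d output_mode=csv

The export endpoint streams results as they are produced, which tends to suit scheduled pulls of large result sets better than polling a search job.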
Hi community, can anyone help me figure out which Get log entries return incorrect data after an Update (both Get and Update log the request and response)? In my case, the data can be updated multiple times, and I need to guarantee that every Get returns the correct data. For example, take these 5 log rows: 1. Update A = 5, 2. Get A = 5, 3. Get A = 6, 4. Update A = 6, 5. Get A = 6. These logs are sorted by time. Obviously the result obtained in the third row is incorrect; it should have returned A = 5. The sample data looks like:

id         value  time        operation
124945912  FALSE  1718280482  get
124945938  FALSE  1718280373  get
124945938  FALSE  1718280373  update
124945938  null   1718280363  get
124945937  FALSE  1718280348  get
124945937  FALSE  1718280348  update
124945937  null   1718280337  get
124945936  FALSE  1718280330  get
124945936  FALSE  1718280330  update

Both id=124945937 and id=124945936 are correct, since the value obtained after the Update operation is the same as the Update value (FALSE), even though the value obtained before the Update (null) does not equal the Update value. Get operations with no prior Update can be ignored. Can anyone help? Thanks in advance ^^
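
A possible starting point, assuming the field names from the sample (id, value, time, operation) and a placeholder index: sort each id's events into time order, carry the most recent update value forward with streamstats, and flag any get whose value disagrees with it.

index=your_index operation IN (get, update)
| sort 0 id time
| eval update_value=if(operation="update", value, null())
| streamstats last(update_value) AS expected BY id
| where operation="get" AND isnotnull(expected) AND value!=expected

Gets before the first update have expected=null and are skipped, matching the "ignore Get with no prior Update" rule; ties where a get and an update share the same timestamp (as in the sample) may need an extra tiebreaker field in the sort.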
Hi there, I am trying to get some data from MS Defender into a Splunk query. My original KQL query in Azure contains | join kind=inner to combine the DeviceProcess and DeviceRegistry tables. The Splunk app I am using: https://splunkbase.splunk.com/app/5518
So basically I'd like to join DeviceProcess and DeviceRegistry events in an advanced hunting query (| advhunt) in Splunk SPL. Is there a suitable Splunk query for this kind of purpose?
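
As a sketch only: SPL's join command can combine the two advanced-hunting result sets on a shared key such as DeviceId. The advhunt invocation syntax below is an assumption; check the add-on's documentation for its real arguments.

| advhunt query="DeviceProcessEvents | project DeviceId, FileName, ProcessCommandLine"
| join type=inner DeviceId
    [| advhunt query="DeviceRegistryEvents | project DeviceId, RegistryKey, RegistryValueName"]

Alternatively, since join is native to KQL, it may be simpler to keep the join inside a single advanced hunting query and pass the whole joined query through one advhunt call.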
Hi @Silah, you could try to run something like this:

index=your_index status IN (Begin, End)
| stats earliest(eval(status="Begin")) AS Begin_time latest(eval(status="End")) AS End_time BY UUID
| eval diff=End_time-Start_time
| table UUID diff

then you can manage the incomplete conditions, e.g. where there's only one event (Start or End). Ciao. Giuseppe
Hi, I am getting a log feed from a transactional system. Each log entry has a status of either Begin, End, or something in between (but for this I don't care about the in-between), and a UUID to mark that entries belong to the same transaction. I am struggling to write a search query that essentially subtracts the _time of the Begin entry with UUID123 from the _time of the End entry with the same UUID. Obviously, my goal is to get the time it took the transaction to complete, but I am not sure how to compare fields in two entries with the same UUID. Any ideas? Thanks
Hi @AnanthaS, probably the issue is that the boolean AND operator must be in uppercase. Also, don't use where after the main search; it makes your search slower! Put all the search terms in the main search:

index=shared_data source="lambda:maintenance_window_handler" sourcetype="httpevent" (eventStartsFrom <= now() AND eventEndsAt >= now())

If your search still doesn't work, probably you don't have any event containing both the eventStartsFrom and eventEndsAt fields, and you have to group them using the stats command. Ciao. Giuseppe
I'm having the same exact issue as @AntonioJimenez and it is also a blocker for us. Perhaps the author of this article might be able to help?
The following query yields no results:

index=shared_data source="lambda:maintenance_window_handler" sourcetype="httpevent"
| where eventStartsFrom <= now() and eventEndsAt >= now()

but

index=shared_data source="lambda:maintenance_window_handler" sourcetype="httpevent"
| where eventStartsFrom <= now()

and

index=shared_data source="lambda:maintenance_window_handler" sourcetype="httpevent"
| where eventEndsAt >= now()

both work individually. All comparisons are made against epoch date format. Can someone help me understand what mistake I am making here?
Thanks for the reply!! The stats I am looking for are for single Windows servers.

| timechart latest('CPU') by process_name host

timechart followed by "by process_name host" does not work.
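
For context: timechart accepts only a single split-by field, which is why "by process_name host" fails. One common workaround (a sketch; the base search is a placeholder) is to concatenate the two fields into a single series with eval first:

index=windows_perf host=your_server
| eval series=process_name.":".host
| timechart latest(CPU) by series

Each process/host pair then becomes its own line on the chart.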
Thanks for the reply!! Mostly 4 to 8 cores for the Windows servers.
Hi @bowesmana

"What do you have in your real search before you do the eventstats, as it will push all the data to the search head, including _raw, so unless you use the fields statement you will be sending all the event data to the SH."

>> Can you rephrase your statement? How do I improve efficiency using the fields statement? My search on real data uses a table statement without "*", but it does have a lot of fields.

"You are also doing lots of multivalue splits, which is going to be pretty memory hungry on the SH."

>> Which part of my search is using multivalue splits?

"What is the depth of the tree in your case? Your example is 3 tier, going from server via the LB - if it's only 3 tier, then you could perhaps build your pathways just by fetching the name="LoadBalancer" objects and using stats values() rather than eventstats to create the lookup - as at that point you don't care about the IPs."

>> The depth is always 3-tier: Server -> LB -> network. Can you give an example using stats values() to create a lookup? I do care about the IP, since one server can have multiple IPs on its interfaces. For example, Server-A can have 192.162.1.7 (int1) and 192.162.1.6 (int2).

I appreciate your assistance. Thank you so much.
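
Not speaking for @bowesmana, but a minimal sketch of the stats values() idea with hypothetical field names (lb, server_ip; the index is a placeholder) might look like this, writing the collapsed pathways out as a lookup:

index=your_topology name="LoadBalancer"
| stats values(server_ip) AS server_ips BY lb
| outputlookup lb_pathways.csv

Unlike eventstats, stats collapses the events down to one row per load balancer before anything reaches the lookup, so much less data moves through the search head; the values() multivalue field also keeps every IP per server, which addresses the multiple-interface case.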
The purpose of this query is to create legacy diagrams of how the search head works in Splunk. I want to know the internal flow of the search head so anyone can use it in a future LLD or flow diagram. 
Hello @bowesmana

The eval match condition worked, but it didn't give me the result I expected. Is it possible to use an eventstats match condition to group the students based on partialname? Do you think moving to eventstats makes the search more efficient? I appreciate your help. Thank you so much.

Without the eventstats match condition, it worked:

| makeresults format=csv data="grade,name
A,student-1-a
A,student-1-b
A,student-1-c
A,student-2-a
A,student-2-b
A,student-2-c"
| eval partialname=substr(name,0,9)
| eventstats values(name) as student by partialname

With the eventstats match condition, it didn't work:

| makeresults format=csv data="grade,name
A,student-1-a
A,student-1-b
A,student-1-c
A,student-2-a
A,student-2-b
A,student-2-c"
| eval partialname=substr(name,0,9)
| eventstats values(eval(if(match(name,substr(name,0,9)), name, null()))) as student by grade

Data:

class    name
class-1  student-1-a
class-1  student-1-b
class-1  student-1-c
class-1  student-2-a
class-1  student-2-b
class-1  student-2-c

Expected result:

grade  name         student
A      student-1-a  student-1-a, student-1-b, student-1-c
A      student-1-b  student-1-a, student-1-b, student-1-c
A      student-1-c  student-1-a, student-1-b, student-1-c
A      student-2-a  student-2-a, student-2-b, student-2-c
A      student-2-b  student-2-a, student-2-b, student-2-c
A      student-2-c  student-2-a, student-2-b, student-2-c

Current result with the eventstats match condition (every row gets all six students):

grade  name         partialname  student
A      student-1-a  student-1    student-1-a, student-1-b, student-1-c, student-2-a, student-2-b, student-2-c
A      student-1-b  student-1    (same six values)
A      student-1-c  student-1    (same six values)
A      student-2-a  student-2    (same six values)
A      student-2-b  student-2    (same six values)
A      student-2-c  student-2    (same six values)
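
A plausible explanation for the difference, for what it's worth: match(name, substr(name,0,9)) is true for every row, because a string always contains its own prefix, so the eval filters nothing; grouping by grade then lumps student-1 and student-2 together. Keeping the computed partialname in the by clause preserves the grouping while retaining grade (a sketch reusing the makeresults data above):

| makeresults format=csv data="grade,name
A,student-1-a
A,student-1-b
A,student-1-c
A,student-2-a
A,student-2-b
A,student-2-c"
| eval partialname=substr(name,0,9)
| eventstats values(name) AS student BY grade partialname
| fields - partialname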
Regular expressions (RegEx) are powerful tools for splitting data based on patterns. Use split() with a RegEx pattern to segment text into manageable components, such as dividing a string by commas or spaces, for instance split(/[,\s]+/). Customize patterns to match specific delimiters or structures in your data, ensuring accurate segmentation for tasks like parsing CSV files or extracting structured information from unformatted text.
Hi @vijreddy30, as I said, use the Add Data feature to define the correct sourcetype, especially since, looking at your screenshot, you have a timestamp of 6/12/24 0:37:54 while the Date column shows 04/06/2024 and the Time column shows 10:48:00. In Add Data, you can configure and test both the timestamp recognition and the line breaking. Ciao. Giuseppe
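
For reference, the settings that Add Data generates can also be written by hand in props.conf on the relevant forwarder/indexer. A minimal sketch, assuming each event starts with a date like 04/06/2024 followed by a time like 10:48:00 (the sourcetype name, the day/month order, and the column separator are assumptions to verify against the data):

# props.conf - minimal timestamp/line-breaking sketch
[your_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Timestamp sits at the start of the event; widen the lookahead if fields precede it
TIME_PREFIX = ^
# If Date and Time are separate CSV columns, include the comma: %d/%m/%Y,%H:%M:%S
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20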