All Posts
Hi @gcusello, thank you for the response. In fact, the file content is mixed-syntax: some lines are in JSON format and some are in a log-info style, e.g. 2024-02-08 | 23.118 | <hostname> | DEBUG | QueryForSuccess. We run searches against the specific content with different search strings. I agree that defining a SEDCMD is not easy. Is there any other way we can drop the unused data and index only the data we want?
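If the unwanted lines can be matched with a regex, one standard option is to route them to the null queue at parsing time (this must run on the indexers or a heavy forwarder, not a UF). A sketch with hypothetical stanza and sourcetype names, dropping the pipe-delimited DEBUG lines:

# transforms.conf
[drop_debug_lines]
REGEX = \|\s*DEBUG\s*\|
DEST_KEY = queue
FORMAT = nullQueue

# props.conf
[my_mixed_sourcetype]
TRANSFORMS-drop = drop_debug_lines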
Folks, I'm new to Splunk but learning. However, I've been stuck, and I think I need help with a simple query and dashboard.
1. I'm able to create a simple XML dashboard with a query that lists a number of users and what they are doing, from an indexed log file. Works fine. Example: server.log, and query sample index=Test* "`Users`"
2. I have one dataset CSV file containing server names and clusters, which I uploaded into my space.
Now, how do I combine the two and create a dashboard from my dataset file and server log, so that it includes the user info from the indexed server logs together with the server and cluster info from the dataset CSV file? Please advise.
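A minimal sketch of combining the two, assuming the CSV was uploaded as a lookup table file named servers.csv with columns server and cluster, and that the server column matches the host field on the indexed events (all names are hypothetical):

index=Test* "Users"
| lookup servers.csv server AS host OUTPUT cluster
| stats count BY user, host, cluster

Once a search like this returns what you expect, you can save it directly as a dashboard panel.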
Hi, I need to know how and where to set the value of allow_skew for the Enterprise Security app, as I have many alerts triggering every 5 minutes. Thank you.
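allow_skew is a per-scheduled-search setting in savedsearches.conf; it lets the scheduler spread scheduled searches over their scheduled period. A minimal sketch (the stanza name is hypothetical and must match your correlation search's name; the value can be a time span like 5m or a percentage of the schedule period):

[My Correlation Search]
allow_skew = 50%

Setting it under [default] instead would apply it to every scheduled search in the app, which may or may not be what you want.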
Hi @dhirendra761, it's possible to truncate a log event by defining the length of each event, but with JSON data you then lose the JSON format and the ability to use the spath command to extract fields, so you would have to extract all the fields manually; I suggest avoiding this. Maybe (I'm not sure) it's possible to identify a part of the log event that can be removed (using SEDCMD in props.conf) while maintaining the JSON structure, but it isn't so easy! Ciao. Giuseppe
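If the events do have a removable section, a SEDCMD might look like this hypothetical props.conf entry, which strips an "options" array while trying to keep the JSON parseable (the sourcetype name and regex are illustrative only, and real data may need a more careful pattern):

[my_json_sourcetype]
SEDCMD-strip_options = s/"options":\s*\[[^\]]*\],\s*//g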
I'm collecting PaperCut logs from a Windows server:

[monitor://C:\Program Files\PaperCut MF\server\logs\print-logs\printlog_*.log]
disabled = false

The output and index are applied via a deployment server. Searching with index=* host=<hostname> finds nothing. The splunkforwarder service account has read access on the folder and its children.
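To verify on the forwarder that the stanza was actually deployed and the file is being tailed, a standard troubleshooting step is to run these CLI checks from the UF's bin directory (a diagnostic sketch, not a fix):

splunk btool inputs list monitor --debug
splunk list inputstatus

The first shows which inputs.conf file each monitor stanza comes from; the second reports the tailing processor's status for each monitored file.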
Hi Splunkers ~~ I tried to set up markdown in a text element and use the GUI to modify the color or move the layer up or down, but it has no effect; only changing the JSON content directly works. The interesting thing is that the same version on another server works. Any suggestions? Can any expert help?
Hi Team, while running the query I see this error. How can I overcome it? I have tried the spath command, but it does not work. I have attached a screenshot of the error. Could you please help with this ASAP? Thanks in advance.
Hi, we are monitoring a whole file into an index. The file is huge, so all of its content gets indexed, but we only require a specific part of the file to be indexed.

SAMPLE DATA:

{"quiz": {
  "sport": {
    "q1": {
      "question": "Which one is correct team name in NBA?",
      "options": ["New York Bulls", "Los Angeles Kings", "Golden State Warriros", "Huston Rocket"],
      "answer": "Huston Rocket"
    }
  },
  "maths": {
    "q1": {
      "question": "5 + 7 = ?",
      "options": ["10", "11", "12", "13"],
      "answer": "12"
    },
    "q2": {
      "question": "12 - 8 = ?",
      "options": ["1", "2", "3", "4"],
      "answer": "4"
    }
  }
}}

Sample SPL:

index="test" "answer" | <further spl>

How can we index only the partial data of the file containing the answer string, rather than the whole file? Thank you in advance for your help!
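The documented pattern for "index only what you want" is to send everything to the null queue and then re-route the events you want to keep to the index queue. This runs at parsing time on the indexers or a heavy forwarder, and it only helps if the wanted and unwanted data arrive as separate events; the stanza names below are hypothetical:

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_answers]
REGEX = "answer"
DEST_KEY = queue
FORMAT = indexQueue

# props.conf -- order matters: the keep rule must come after the drop-all rule
[my_sourcetype]
TRANSFORMS-filter = setnull, keep_answers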
Any luck with this?  I am having the same problem trying to post to Mattermost. I believe it to be a payload format problem.  
It would be best to have all the info that is in the forwarder query (the forwarder type, the average KB/s, the OS, the IP, the Splunk version), but the index as well, as we'd like to create a detailed report that will help when moving to Cloud.
This is an individual use case, i.e. it depends on your hours of business and holiday dates. If I were to approach it, I would take the total time difference and subtract the evening and morning out-of-hours time for each day in the range, then subtract the working hours for each weekend day and holiday date between the start date and the end date. I would use a lookup file for the holiday dates, containing every holiday date you want to consider, with each date carrying another field with a flag in it. I would work out which dates are involved by creating a multi-value field with all the intervening dates. The lookup could then retrieve all the holiday flags and therefore work out how many hours to deduct from the duration.
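A minimal SPL sketch of this approach, assuming events carry start_time and end_time as epoch seconds, an id field ticket_id, business hours of 09:00-17:00, and a holiday lookup definition named holidays_lookup with a date column (YYYY-MM-DD) and an is_holiday flag (all names are hypothetical):

| eval day_starts = mvrange(relative_time(start_time, "@d"), relative_time(end_time, "@d") + 86400, 86400)
| mvexpand day_starts
| eval date = strftime(day_starts, "%Y-%m-%d"), dow = strftime(day_starts, "%a")
| lookup holidays_lookup date OUTPUT is_holiday
| eval work_start = max(start_time, day_starts + 9*3600), work_end = min(end_time, day_starts + 17*3600)
| eval hours = case(dow="Sat" OR dow="Sun" OR isnotnull(is_holiday), 0, work_end > work_start, (work_end - work_start) / 3600, true(), 0)
| stats sum(hours) AS business_hours BY ticket_id

Each event is fanned out into one row per calendar day, out-of-hours and non-working days contribute zero, and the per-day working hours are summed back up.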
Hi @hazardoom, this search gives you different information; what do you really need? To know the hosts that are sending to each index? If this is your requirement, you can use my previous search. Ciao. Giuseppe
Hi @anandhalagaras1, you have to associate SHOULD_LINEMERGE = false with the sourcetype of your data on the UFs and on the Splunk Cloud search heads. Ciao. Giuseppe
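In props.conf terms that would look like the following (the sourcetype name is hypothetical; LINE_BREAKER is shown because it defines the event boundary once line merging is off):

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)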
Hi @dorHerbesman, try something like this:

index=myidnex sourcetype=mysourcetype source=mysource
| stats count BY TABLEQ
| append [
    | inputlookup your_lookup
    | eval count=0
    | rename tableq AS TABLEQ
    | fields TABLEQ count ]
| stats sum(count) AS total BY TABLEQ
| where total=0

Ciao. Giuseppe
Sorry, I am not a security expert.
@dorHerbesman I don't see the lookup command in your search.

index=elbit_hr sourcetype=synerionDB source=retromng
| table ACCUM_CODE LOCK_CODE PERIOD_KEY TABLEQ UPD_DATE UPD_TIME USER_NAME
@dorHerbesman Hi, you must upload the lookup first, then run | inputlookup tableq_lookyp to check it. Once you can successfully view your lookup data, you can use it in Splunk with the lookup command: combine your initial search query with the lookup command. Kindly review the Splunk documentation below for reference. lookup - Splunk Documentation
@gcusello Yes, I have updated the props.conf on the UF of the server. Since I don't have access to the indexers, it didn't work. Our search heads are hosted in Cloud and managed by Splunk Support. So what should I do if I need to apply it to the indexers directly?
Hey, I'm trying to do something relatively easy and for some reason can't make it work. I have a lookup named tableq_lookyp with only one column, tableq, with the values 1,2,4,5,7,8,10,11,12,13,14,15,16,20,21,22 (each value is a different row), and I have this search:

index=myidnex sourcetype=mysourcetype source=mysource
| table ACCUM_CODE LOCK_CODE PERIOD_KEY TABLEQ UPD_DATE UPD_TIME USER_NAME

I want to check whether all of the values from the tableq lookup exist in my search, so I should get 16 rows (the number of distinct values in tableq) and a new column with yes/no values telling me whether each value appears in the search or not. What is the best way of doing this? Thanks!
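One possible sketch that returns exactly one row per lookup value with a yes/no column (the index/sourcetype placeholders are copied from the search above; note that join subsearch limits apply for large result sets, in which case the append pattern shown earlier scales better):

| inputlookup tableq_lookyp
| rename tableq AS TABLEQ
| join type=left TABLEQ
    [ search index=myidnex sourcetype=mysourcetype source=mysource
    | stats count BY TABLEQ ]
| eval appears = if(isnotnull(count), "yes", "no")
| table TABLEQ appears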
Hi Giuseppe, thanks for the fast response. Is it possible to recreate the search from the monitoring console for forwarder instances and somehow connect it to each index?

`dmc_get_forwarder_tcpin` hostname=*
| eval source_uri = hostname.":".sourcePort
| eval dest_uri = host.":".destPort
| eval connection = source_uri."->".dest_uri
| stats values(fwdType) as fwdType, values(sourceIp) as sourceIp, latest(version) as version, values(os) as os, values(arch) as arch, dc(dest_uri) as dest_count, dc(connection) as connection_count, avg(tcp_KBps) as avg_tcp_kbps, avg(tcp_eps) as avg_tcp_eps by hostname, guid
| eval avg_tcp_kbps = round(avg_tcp_kbps, 2)
| eval avg_tcp_eps = round(avg_tcp_eps, 2)
| `dmc_rename_forwarder_type(fwdType)`
| rename hostname as Instance, fwdType as "Forwarder Type", sourceIp as IP, version as "Splunk Version", os as OS, arch as Architecture, guid as GUID, dest_count as "Receiver Count", connection_count as "Connection Count", avg_tcp_kbps as "Average KB/s", avg_tcp_eps as "Average Events/s"

I really need this information for each forwarder, as in the query. The issue I see is that it searches `dmc_get_forwarder_tcpin`, which is equal to index=_internal sourcetype=splunkd group=tcpin_connections (connectionType=cooked OR connectionType=cookedSSL) fwdType=* guid=*, and I cannot find the indexes there. How can I connect it to each index?
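One possible (imperfect) way to connect forwarders to indexes is a sketch like the following, assuming the host field on your indexed events matches the forwarder hostname reported by tcpin_connections (this breaks if host is overridden at input time or data flows through intermediate forwarders, and the macro only resolves on the Monitoring Console instance):

| tstats count WHERE index=* BY host, index
| stats values(index) AS indexes BY host
| rename host AS hostname
| join type=left hostname
    [ search `dmc_get_forwarder_tcpin` hostname=*
    | stats values(fwdType) AS fwdType, values(sourceIp) AS IP, latest(version) AS version, values(os) AS os, avg(tcp_KBps) AS avg_tcp_kbps BY hostname
    | eval avg_tcp_kbps = round(avg_tcp_kbps, 2) ]

This maps each host to the indexes it writes to via tstats, then enriches each host with the forwarder details from the tcpin_connections data.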