All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, how do I use a specific start date in a weekly timechart? For example, I have a set of grades (Math, English, Science) for Student1 and Student2 from 2/8/2024 to 3/1/2024. When I use timechart with a weekly span, it always starts the buckets at 02/08/2024:

| timechart span=1w first(MathGrade) by Student useother=f limit=0

How do I start from another date, such as 02/09/2024 or 02/10/2024? Thank you for your help. Here's the search:

| makeresults format=csv data="_time,Student,MathGrade,EnglishGrade,ScienceGrade
1707368400,Student1,10,10,10
1707454800,Student1,9,9,9
1707541200,Student1,8,8,8
1707627600,Student1,7,7,7
1707714000,Student1,6,6,6
1707800400,Student1,5,5,5
1707886800,Student1,6,6,6
1707973200,Student1,7,7,7
1708059600,Student1,8,8,8
1708146000,Student1,9,9,9
1708232400,Student1,10,10,10
1708318800,Student1,10,10,10
1708405200,Student1,9,9,9
1708491600,Student1,8,8,8
1708578000,Student1,7,7,7
1708664400,Student1,6,6,6
1708750800,Student1,5,5,5
1708837200,Student1,6,6,6
1708923600,Student1,7,7,7
1709010000,Student1,8,8,8
1709096400,Student1,9,9,9
1709182800,Student1,10,10,10
1709269200,Student1,10,10,10
1707368400,Student2,9,9,9
1707454800,Student2,5,5,5
1707541200,Student2,6,6,6
1707627600,Student2,7,7,7
1707714000,Student2,8,8,8
1707800400,Student2,9,9,9
1707886800,Student2,5,5,5
1707973200,Student2,6,6,6
1708059600,Student2,7,7,7
1708146000,Student2,8,8,8
1708232400,Student2,9,9,9
1708318800,Student2,9,9,9
1708405200,Student2,5,5,5
1708491600,Student2,6,6,6
1708578000,Student2,7,7,7
1708664400,Student2,8,8,8
1708750800,Student2,9,9,9
1708837200,Student2,5,5,5
1708923600,Student2,6,6,6
1709010000,Student2,7,7,7
1709096400,Student2,8,8,8
1709182800,Student2,9,9,9
1709269200,Student2,9,9,9"
| table _time, Student, MathGrade, EnglishGrade, ScienceGrade
| timechart span=1w first(MathGrade) by Student useother=f limit=0
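One possible workaround, sketched on the assumption that your version supports the aligntime option of bin: replace the final timechart with an explicit bin so the 7-day buckets are anchored to an epoch of your choosing (1707454800 here, one day after the first sample, i.e. 02/09/2024 at midnight in the data's timezone; verify that value for your deployment), then rebuild the series with chart:

| bin _time span=7d aligntime=1707454800
| chart first(MathGrade) over _time by Student

Everything before those two lines stays exactly as in the original search.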
Here is my current rex command:

EventCode=1004 | rex field=_raw "Files: (?<Media_Source>.+?\.txt)" | table Media_Source

My source data looks like this:

Files: C:\ProgramData\Roxio Log Files\Test.test_user_20240305122549.txt SHA1: 73b710056457bd9bda5fee22bb2a2ada8aa9f3e0

My current rex result is C:\ProgramData\Roxio Log Files\Test.test_user_20240305122549.txt. How do I make it return just Test.test_user_20240305122549.txt? I'm trying to drop C:\ProgramData\Roxio Log Files\.
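A minimal sketch, assuming the filename itself never contains a backslash: keep the existing extraction, then re-extract everything after the last backslash. (In SPL, a literal backslash inside a quoted rex pattern is usually written as four backslashes.)

EventCode=1004
| rex field=_raw "Files: (?<Media_Source>.+?\.txt)"
| rex field=Media_Source "(?<Media_Source>[^\\\\]+)$"
| table Media_Source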
Hi, I've been trying to connect/join two log sources that have fields sharing the same values. To break it down:

source_1: field_A, field_D, and field_E
source_2: field_B and field_C

field_A and field_B can share the same value, and field_C can correspond to multiple values of field_A/field_B. The query should essentially add field_C from source_2 to every filtered event in source_1 (like a left join, with source_2 almost functioning as a lookup table). I've gotten pretty close with my join query, but it's a bit slow and isn't populating all the field_C values; inspecting the job reveals I'm hitting the 50,000-result subsearch limit. I've also tried a stats query, which is much faster, but it isn't actually connecting the events together. Here are the queries I've been using so far.

join:

index=index_1 sourcetype=source_1 field_D="Device" field_E=*Down* OR field_E=*Up*
| rename field_A as field_B
| join type=left max=0 field_B [ search source="source_2" earliest=-30d@d latest=@m ]
| table field_D field_E field_B field_C

stats with coalesce():

index=index_1 (sourcetype=source_1 field_D="Device" field_E=*Down* OR field_E=*Up*) OR (source="source_2" earliest=-30d@d latest=@m)
| eval field_AB=coalesce(field_A, field_B)
| fields field_D field_E field_AB field_C
| stats values(*) as * by field_AB

Expected output:

field_D    field_E         field_A/field_B    field_C
fun_text   Up/Down_text    shared_value       corresponding_value
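One pattern worth trying, sketched on the assumption that the per-field_AB groups are small enough for eventstats to handle: search both sources at once, copy field_C across each group with eventstats, then keep only the source_1 rows. This keeps every source_1 event (unlike the stats rollup) and sidesteps join's subsearch limits entirely:

index=index_1 ((sourcetype=source_1 field_D="Device" (field_E=*Down* OR field_E=*Up*)) OR source="source_2")
| eval field_AB=coalesce(field_A, field_B)
| eventstats values(field_C) as field_C by field_AB
| where sourcetype="source_1"
| table field_D field_E field_AB field_C

Note that the separate earliest=-30d@d window for source_2 is lost this way; if the two sources really need different time ranges, that part would still require a subsearch or a scheduled lookup built from source_2.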
I have old search heads that were removed via the "splunk remove shcluster-member" command. They rightly do not show up when I run "splunk show shcluster-status"; however, when I run "splunk show kvstore-status", all the removed search heads still appear in the listing. How do I remove them from KV store clustering as well?
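One commonly cited remedy, worth validating against the documentation for your Splunk version before running it in production: wipe the local KV store cluster state so that it re-forms from the current SHC membership on restart.

# on the affected search head
splunk stop
splunk clean kvstore --cluster
splunk start
splunk show kvstore-status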
Hi, when I installed the Splunk forwarder on my host, it didn't ask me for an administrator username and password, so when I start Splunk and connect it to my Splunk Enterprise instance I can't enter credentials, and the default password doesn't work. Thanks for your help.
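A minimal sketch of the usual fix, assuming no admin account was ever created on the forwarder: seed the credentials in user-seed.conf and restart. If the forwarder has already been started at least once, you may also need to delete $SPLUNK_HOME/etc/passwd first so the seed file is read.

# $SPLUNK_HOME/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = <your-new-password>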
Hello, we have the universal forwarder running on many machines. In general, memory usage is 200 MB and below. However, when we add the stanza below to inputs.conf, it balloons to around 3000 MB (3 GB) on servers where the /var/www path contains some content.

[monitor:///var/www/.../storage/logs/laravel*.log]
index = lh-linux
sourcetype = laravel_log
disabled = 0

These logs are neither plentiful nor especially active, so I'm confused by the large spike in memory usage. There would only be a handful of logs, updated infrequently, yet the spike happens anyway. I've tried to be as specific with the file path as I can (I still need the wildcard directory path), but that doesn't bring any better performance. There may be a lot of files in that path, but only a handful actually match the monitor stanza criteria. Any suggestions on what can be done? Thanks in advance.
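One thing worth testing, sketched on the assumption that the logs always sit exactly one directory level below /var/www: the ... wildcard makes the tailing processor walk the entire tree under /var/www, and every file it examines costs memory even if it never matches. Narrowing the recursion to a single-level * and pushing the filename filter into a whitelist can shrink that working set considerably:

[monitor:///var/www/*/storage/logs]
whitelist = laravel[^/]*\.log$
index = lh-linux
sourcetype = laravel_log
disabled = 0
# optional, assuming stale logs are not needed: skip files not modified recently
ignoreOlderThan = 7d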
Hello, I need help perfecting a sourcetype that doesn't index my JSON files correctly when I define multiple capture groups in the LINE_BREAKER parameter. I'm using this other question to try to figure out how to make it work: https://community.splunk.com/t5/Getting-Data-In/How-to-handle-LINE-BREAKER-regex-for-multiple-capture-groups/m-p/291996

In my case the JSON looks like this:

[{"Field 1": "Value 1", "Field N": "Value N"}, {"Field 1": "Value 1", "Field N": "Value N"}, {"Field 1": "Value 1", "Field N": "Value N"}]

Initially I tried:

LINE_BREAKER = }(,\s){

which split the events except for the first and last records, which were not indexed correctly because of the leading "[" and trailing "]" characters around the payload. After many attempts I have been unable to make it work, but based on what I've read this seems to be the most intuitive way to define the capture groups:

LINE_BREAKER = ^([){|}(,\s){|}(])$

It doesn't work; instead it indexes the entire payload as one event, formatted correctly but unusable. Could somebody please suggest how to correctly define the LINE_BREAKER parameter for the sourcetype? Here is the full version I'm using:

[area:prd:json]
SHOULD_LINEMERGE = false
TRUNCATE = 8388608
TIME_PREFIX = \"Updated\sdate\"\:\s\"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = Europe/Paris
MAX_TIMESTAMP_LOOKAHEAD = -1
KV_MODE = json
LINE_BREAKER = ^([){|}(,\s){|}(])$

Other resolutions to my problem are welcome as well! Best regards, Andrew
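A sketch of one way to do it, assuming every record begins with {: break on either the opening bracket or the comma between objects (discarded via the single capture group), then strip the trailing bracket from the last event with a SEDCMD:

LINE_BREAKER = ([\[,]\s*)(?=\{)
SEDCMD-strip_trailing_bracket = s/\]\s*$//

LINE_BREAKER treats only the first capture group as the break to discard, which may be why the three-group alternation ended up indexing the whole payload as one event.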
I am trying to make a curl request to a direct JSON link and fetch the result. When I hardcode the URL it works fine, but my URL is dynamic and gets created based on the search results of another query. I can see my curl command is correct, but it doesn't give proper output.
Folks, I'm new to Splunk but learning. However, I've been stuck and I need help with what I think is a simple query and dashboard.

1. I'm able to create a simple XML query with a dashboard that lists a number of users and what they are doing, from an indexed log file. Works fine. Example: server.log, with query sample index=Test* "`Users`".
2. I have one dataset CSV file containing server names and clusters that I uploaded into my space.

Now, how do I combine the query and create a dashboard from my dataset file and server log, so that it includes the user info from the indexed server logs together with the server and cluster info from the dataset CSV file? Please advise.
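A minimal sketch of the usual pattern, with hypothetical file and field names (a lookup file servers.csv with columns server and cluster, events that carry the server name in host, and a user field; adjust all of these to your data): enrich each event via a lookup, then build the dashboard panel from the stats output.

index=Test* "`Users`"
| lookup servers.csv server AS host OUTPUT cluster
| stats count by user, host, cluster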
Hi, I need to know how and where to set the value of allow_skew for the Enterprise Security app, as I have many alerts triggering every 5 minutes. Thank you.
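A sketch of one way, assuming you want the skew to apply broadly rather than to a single search: allow_skew is a savedsearches.conf setting, so it can go in a local savedsearches.conf of the app that owns the searches (for ES correlation searches that is often SplunkEnterpriseSecuritySuite or the relevant DA-/SA- app; that location is an assumption to verify in your environment):

# $SPLUNK_HOME/etc/apps/<app>/local/savedsearches.conf
[default]
allow_skew = 10%

For a single search, put the same setting under that search's own stanza name instead of [default].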
Hi Splunkers, I'm trying to set up markdown in a text element and use the GUI to modify the color or move the layer up or down, but it has no effect; only editing the JSON content directly works. The interesting thing is that the same version on another server works fine. Any suggestions? Can any expert help?
Hi team, while running the query I see this error. How do I overcome it? I have tried the spath command, but it does not work. I have attached a screenshot of the error. Could you please help with this as soon as possible? Thanks in advance.
Hi, we are monitoring a whole file into an index. The file is huge, and all of its content gets indexed, but we only need a specific part of the file to be indexed.

Sample data:

{"quiz": {
  "sport": {
    "q1": {
      "question": "Which one is correct team name in NBA?",
      "options": ["New York Bulls", "Los Angeles Kings", "Golden State Warriros", "Huston Rocket"],
      "answer": "Huston Rocket"
    }
  },
  "maths": {
    "q1": {
      "question": "5 + 7 = ?",
      "options": ["10", "11", "12", "13"],
      "answer": "12"
    },
    "q2": {
      "question": "12 - 8 = ?",
      "options": ["1", "2", "3", "4"],
      "answer": "4"
    }
  }
}}

Sample SPL:

index="test" "answer" | <further spl>

How can we index only the partial data of the file containing the answer string, rather than the whole file? Thank you in advance for your help!
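A sketch of the standard index-time filter, with the caveat that it operates per event, so the file must first be broken into events (for example one event per question object via LINE_BREAKER) before filtering; the sourcetype name is a placeholder:

# props.conf
[your_sourcetype]
TRANSFORMS-keep_answers = drop_everything, keep_answer_events

# transforms.conf
[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_answer_events]
REGEX = "answer"
DEST_KEY = queue
FORMAT = indexQueue

The two transforms run in order: everything is routed to the null queue first, and only events containing "answer" are put back on the index queue.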
Hey, I'm trying to do something relatively easy and for some reason can't make it work. I have a lookup named tableq_lookup with only one column, tableq, containing the values 1, 2, 4, 5, 7, 8, 10, 11, 12, 13, 14, 15, 16, 20, 21, 22 (each value on a different row), and I have this search:

index=myidnex sourcetype=mysourcetype source=mysource
| table ACCUM_CODE LOCK_CODE PERIOD_KEY TABLEQ UPD_DATE UPD_TIME USER_NAME

I want to check whether all of the values from the tableq lookup exist in my search, so I should get 16 rows (the number of distinct values in tableq) and a new column with yes/no values telling me whether each value appears in the search. What is the best way of doing this? Thanks!
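A minimal sketch, assuming the lookup is named tableq_lookup and its field is tableq: start from the lookup so every row survives, left-join the distinct values seen in the search, then flag the ones that matched.

| inputlookup tableq_lookup
| join type=left tableq [ search index=myidnex sourcetype=mysourcetype source=mysource | stats count by TABLEQ | rename TABLEQ as tableq ]
| eval appears=if(isnull(count), "no", "yes")
| table tableq appears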
Hi, we have around 340 indexes and I need to know which universal/heavy forwarder forwards data to which exact index. How can I do that? Thanks.
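A starting point, with the caveat that host identifies the machine the event came from (for universal forwarders that is usually the forwarder itself, but not for syslog or HEC traffic relayed through an intermediary):

| tstats count where index=* earliest=-24h by index, host
| sort index, host

For the forwarding layer itself (which forwarder connects to which receiver), the tcpin_connections entries in index=_internal source=*metrics.log* are the usual place to look.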
I am trying to trigger a Splunk search query from Java but getting a connection timeout. Below is the stack trace:

ERROR 2024-03-05 15:17:06,830 [http-nio-9091-exec-2] traceID= app=NONE ver=0.0 geo=eu businessGeo= serviceGroupId=NONE env=local cl=com.nike.backstopper.handler.spring.SpringUnhandledExceptionHandler messageId= messageType= messageSourceId= : Caught unhandled exception: error_uid=e4efc159-ad38-4ab5-8bf1-4e277c49448b, dtrace_id=null, exception_class=java.lang.RuntimeException, returned_http_status_code=500, contributing_errors="GENERIC_SERVICE_ERROR", request_uri="/node/intgpltfm/messagetypes/v1/hello", request_method="GET", query_string="null", request_headers="authorization=Bearer  token value,postman-token=a37269d8-fb3a-498b-85f6-acb1611f84c0,host=localhost:9091,connection=keep-alive,accept-encoding=gzip, deflate, br,user-agent=PostmanRuntime/7.36.3,accept=*/*", unhandled_error="true"
java.net.ConnectException: Connection timed out: connect
    at java.net.PlainSocketImpl.connect0(Native Method) ~[?:?]
    at java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:101) ~[?:?]
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) ~[?:?]
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) ~[?:?]
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) ~[?:?]
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:?]
    at java.net.Socket.connect(Socket.java:608) ~[?:?]
    at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:302) ~[?:?]
    at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173) ~[?:?]
    at sun.net.NetworkClient.doConnect(NetworkClient.java:182) ~[?:?]
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:510) ~[?:?]
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:605) ~[?:?]
    at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:265) ~[?:?]
    at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:372) ~[?:?]
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:207) ~[?:?]
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187) ~[?:?]
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081) ~[?:?]
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:193) ~[?:?]
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:168) ~[?:?]
    at com.splunk.HttpService.send(HttpService.java:380) ~[splunk-1.4.0.0.jar:1.4.0]
    ... 80 more
Wrapped by: java.lang.RuntimeException: Connection timed out: connect
    at com.splunk.HttpService.send(HttpService.java:382) ~[splunk-1.4.0.0.jar:1.4.0]
    at com.splunk.Service.send(Service.java:1280) ~[splunk-1.4.0.0.jar:1.4.0]
    at com.splunk.HttpService.get(HttpService.java:163) ~[splunk-1.4.0.0.jar:1.4.0]
    at com.splunk.Service.export(Service.java:220) ~[splunk-1.4.0.0.jar:1.4.0]
    at com.splunk.Service.export(Service.java:235) ~[splunk-1.4.0.0.jar:1.4.0]
    at com.nike.na.node.intg.status.service.SplunkService.getFileList(SplunkService.java:87) ~[main/:?]
    at com.nike.na.node.intg.status.controller.MessageTypeController.getHelloWorldMessage(MessageTypeController.java:141) ~[main/:?]
    at com.nike.na.node.intg.status.controller.MessageTypeController$$FastClassBySpringCGLIB$$6afd90e.invoke(<generated>) ~[main/:?]
    at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) ~[spring-core-5.3.27.jar:5.3.27]
    at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:793) ~[spring-aop-5.3.27.jar:5.3.27]

I tried using Postman as well and received the same error there. Your assistance will be greatly appreciated! Thanks.
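Since both the Java SDK and Postman time out, the failure is most likely network-level: the management port being unreachable, a proxy in the way, or connecting to the web UI port instead of the REST port. A minimal connectivity sketch with the Splunk Java SDK follows; all host and credential values are placeholders.

// Minimal connectivity check using the Splunk Java SDK (com.splunk).
// Host, port, and credentials are placeholders; the REST/management
// port is 8089 by default, not the web UI port 8000.
import com.splunk.HttpService;
import com.splunk.SSLSecurityProtocol;
import com.splunk.Service;
import com.splunk.ServiceArgs;

public class SplunkConnectCheck {
    public static void main(String[] args) {
        // Newer Splunk versions reject old TLS protocols.
        HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);

        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setUsername("admin");            // placeholder
        loginArgs.setPassword("changeme");         // placeholder
        loginArgs.setHost("splunk.example.com");   // placeholder
        loginArgs.setPort(8089);                   // management port

        Service service = Service.connect(loginArgs);
        System.out.println("Connected; session key acquired: " + (service.getToken() != null));
    }
}

If this times out as well, test raw reachability from the same machine first (for example curl -k https://splunk.example.com:8089) before debugging the Java side.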
I am new to Splunk. How do we write a Splunk query, for a support ticket in "In Progress" status, that calculates the business hours elapsed by the ticket? We need to exclude the non-business hours of weekdays while the incident is in "In Progress" status, and also exclude holidays and weekends.
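A rough sketch of one approach, under several stated assumptions: each ticket arrives as one row with epoch fields start and end (end falling back to now() for still-open tickets), business hours are 09:00 to 17:00 Monday through Friday, and a hypothetical holidays.csv lookup exists with a holiday_date column in %Y-%m-%d format. The idea is to expand each ticket into its calendar days, drop weekends and holidays, and sum the overlap of each day's business window with the ticket's lifetime.

... base search producing one row per ticket with fields ticket_id, start, end ...
| eval end=coalesce(end, now())
| eval day=mvrange(relative_time(start, "@d"), end, 86400)
| mvexpand day
| eval dow=tonumber(strftime(day, "%u")), date=strftime(day, "%Y-%m-%d")
| where dow <= 5
| lookup holidays.csv holiday_date AS date OUTPUT holiday_date AS is_holiday
| where isnull(is_holiday)
| eval bus_start=day + 9*3600, bus_end=day + 17*3600
| eval overlap=max(0, min(end, bus_end) - max(start, bus_start))
| stats sum(overlap) as business_seconds by ticket_id
| eval business_hours=round(business_seconds / 3600, 2)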
Hi everyone, I have the following issue using Splunk Enterprise (v9.2.0). I developed a script to send a CSV dataset to Splunk using a data input (I know it's possible to upload the CSV directly, but I have specific requirements). Then I defined a real-time alert whose settings amount to: trigger an alert every time the provided query returns at least 1 result within a one-minute window (in the actual situation the threshold will be 600, not 1, but this is a test). When I enable the alert and start sending data, I can see the alert's real-time preview window updating, but no alert is triggered. Why?
The lines below are coming in as a single event rather than as separate events. We want to get them split: each event starts with the IP field and ends with the Email field, after which the next event should begin.

IP:aa.bbb.ccc.ddd##Browser:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0##LoginSuccess Wire At:04-03-24 15:10:32##CookieFilePath:/xxx/yyy/abc.com/xyz/abc/forms/submitform/live/12345/98765_3598/clear.txt##ABC:12344564##Sessionid:xyz-a1-ddd_1##Form:xyz##Type:Live##LoginSuccess:Yes##SessionUserId:123##Email:xyz@google.com
IP:aa.bbb.ccc.ddd##Browser:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0##LoginSuccess Wire At:04-03-24 17:12:32##CookieFilePath:/xxx/yyy/abc.com/xyz/abc/forms/submitform/live/12345/1234_9564/clear.txt##ABC:12344564##Sessionid:xyz-a1-ddd_1##Form:xyz##Type:Live##LoginSuccess:Yes##SessionUserId:123##Email:xyz@google.com
IP:aa.bbb.ccc.ddd##Browser:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0##LoginSuccess Wire At:04-03-24 18:10:32##CookieFilePath:/xxx/yyy/abc.com/xyz/abc/forms/submitform/live/12345/9821_365/clear.txt##ABC:12344564##Sessionid:xyz-a1-ddd_1##Form:xyz##Type:Live##LoginSuccess:Yes##SessionUserId:123##Email:xyz@google.com
IP:aa.bbb.ccc.ddd##Browser:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0##LoginSuccess Wire At:04-03-24 20:10:32##CookieFilePath:/xxx/yyy/abc.com/xyz/abc/forms/submitform/live/12345/222_123/clear.txt##ABC:12344564##Sessionid:xyz-a1-ddd_1##Form:xyz##Type:Live##LoginSuccess:Yes##SessionUserId:123##Email:xyz@google.com

So kindly let me know how we can get them split into separate events.
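A sketch of the usual index-time fix, assuming the records are separated by newlines and every record starts with IP: (the sourcetype name is a placeholder, and the timestamp format is assumed to be day-month-year, so verify against your data):

# props.conf on the first full Splunk instance the data passes through
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=IP:)
TIME_PREFIX = Wire\sAt:
TIME_FORMAT = %d-%m-%y %H:%M:%S

The lookahead keeps IP: at the start of each new event while the newline itself is discarded by the capture group.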
Hello all, I just wanted to know whether there is any way to identify which CIM data model a given CIM-compliant add-on on Splunkbase normalizes to. One way I know is to check tags.conf and eventtypes.conf, where the data model name appears in the form of tags, but if tags.conf and eventtypes.conf are not present, how can I identify which data model the add-on maps to? If anybody has faced the same issue or knows how to deal with it, please let me know.
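One empirical check, assuming the add-on's data is already indexed somewhere you can search: look at the tags actually applied to its events at search time and compare them with the tag sets the CIM data models require (for example authentication for Authentication, or network plus communicate for Network Traffic). The index and sourcetype below are placeholders for wherever the add-on's data lands:

index=your_index sourcetype=your_addon_sourcetype
| stats values(eventtype) as eventtypes, values(tag) as tags by sourcetype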