All Posts

Hi @hazardoom, first of all, 340 indexes are a lot, so I suggest redesigning your index structure to reduce them. Anyway, if you want to know the hosts in each index, you can run something like this:
| tstats count values(host) AS host WHERE index=* BY index
If instead you want the Heavy Forwarders, it's more difficult because, for now, the HFs that events pass through aren't recorded in the events; I asked Splunk Ideas for this feature and it's under development. For the moment, you should create an index-time field on each HF and use it in the search, but it's too long to describe here; see https://docs.splunk.com/Documentation/Splunk/9.2.0/Data/Configureindex-timefieldextraction Ciao. Giuseppe
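As a rough sketch of the index-time field approach mentioned above (the sourcetype, field name, and value below are hypothetical, not from the original post), the configuration on each Heavy Forwarder could look roughly like this:

# transforms.conf on the Heavy Forwarder
[add_hf_name]
REGEX = .
FORMAT = hf_name::my_hf_01
WRITE_META = true

# props.conf on the Heavy Forwarder
[my_sourcetype]
TRANSFORMS-hfname = add_hf_name

# fields.conf on the search head, so the indexed field is searchable
[hf_name]
INDEXED = true

With something like this in place, a search such as | tstats count WHERE index=* BY index, hf_name would show which forwarder sent data to which index.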
@ITWhisperer, the above query is based on https://www.splunk.com/en_us/blog/security/active-directory-lateral-movement-detection-threat-research-release-november-2021.html If possible, please help me build a query as per the sample event. Thanks.
Hi, We have around 340 indexes and I need to know which universal/heavy forwarder forwards data to which exact index. How can I do that?  Thanks,
Hi @Guido2000, first of all, please add your search in text mode next time, otherwise it is more difficult to answer you. Anyway, let me understand: you want to trigger an alert when you have more than 600 results every minute, is that right? In these cases I prefer to have the threshold inside the search (for greater readability):
index="fabrication-gear-index" sourcetype="Vehicle-logs" source="ud06148" CAN_ID="44c"
| timechart span=1m count
| where count>600
Also, don't use the minus character (-) in index or sourcetype names, because Splunk interprets it as a minus sign and you then have to use quotes; use underscore (_) instead.
Ciao. Giuseppe
Hi @anandhalagaras1, did you try SHOULD_LINEMERGE = false? Ciao. Giuseppe
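For reference, a minimal props.conf sketch for this kind of data (the sourcetype name is hypothetical; the regex assumes every new event starts with "IP:" and the timestamp order is day-month-year):

[my_login_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=IP:)
TIME_PREFIX = LoginSuccess Wire At:
TIME_FORMAT = %d-%m-%y %H:%M:%S

Adjust TIME_FORMAT if the date is actually month-day-year, and apply the settings on the first full Splunk instance that parses the data (indexer or Heavy Forwarder).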
Hi @Nawab, in the panel's options add <option name="link.openSearch.visible">false</option> Ciao. Giuseppe
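As a rough illustration (the panel content here is only a placeholder, not from the original dashboard), the option sits inside the visualization element of the panel in Simple XML:

<panel>
  <table>
    <search>
      <query>index=_internal | stats count by sourcetype</query>
    </search>
    <option name="link.openSearch.visible">false</option>
  </table>
</panel>

Related options such as link.exportResults.visible and link.inspectSearch.visible can be set to false in the same way if you also want to hide the export and inspect links.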
Hi @kannu, there isn't a pre-defined way to associate an Add-On with a Data Model. You should look at the tags (defined in tags.conf) and map them to the Data Model constraints that you can find on the pages of this URL: https://docs.splunk.com/Documentation/CIM/5.3.1/User/Howtousethesereferencetables . Some Add-Ons can also be associated with more than one Data Model. Ciao. Giuseppe
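For example (the eventtype name and search below are hypothetical), an add-on that maps its events to the Authentication data model typically ships something like:

# eventtypes.conf
[my_vendor_authentication]
search = sourcetype=my:vendor:auth action=*

# tags.conf
[eventtype=my_vendor_authentication]
authentication = enabled

The tag names ("authentication" here) correspond to the constraints listed in the CIM reference tables linked above, which is how you can tell which data model(s) an add-on populates.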
It sounds like you don't have connectivity / network routing from where you are running your code to where you are trying to connect, or the service is not available on that port. Have you tried telnetting to the host and port?
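For example (the hostname is a placeholder; 8089 is the default splunkd management port the SDK talks to):

telnet splunk.example.com 8089
# or, if telnet isn't available:
curl -k https://splunk.example.com:8089/services/server/info

If neither of these connects from the machine running your code, the timeout is a network or firewall issue rather than anything in the code itself.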
Try something like this
index=* sourcetype=transaction OR sourcetype=users
| eval CommonNumber=if(sourcetype="transaction", USERNUMBER, NUMBER)
| eventstats values(NAME) as Employee by CommonNumber
| stats dc(PARENT_ACCOUNT) as transactionMade values(Employee) as Employee values(USERNUMBER) as USERNUMBER by POSTDATE, CommonNumber
| table CommonNumber USERNUMBER Employee PARENT_ACCOUNT POSTDATE transactionMade
I found the solution, which I came across here: https://community.splunk.com/t5/Security/How-do-I-renew-an-expired-Splunk-Certificate/m-p/389701 Turns out, the Splunk certificate was expired. This is how I checked:
$ openssl x509 -enddate -noout -in /opt/splunk/etc/auth/server.pem
notAfter=Feb 27 13:56:21 2024 GMT
To get a new certificate, I removed the old certificate and restarted Splunk (a new certificate will be created when Splunk starts):
$ mv /opt/splunk/etc/auth/server.pem /opt/splunk/etc/auth/server.pem.backup
Now Settings > Tokens is working again.
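For completeness, the restart step (assuming a default /opt/splunk installation) is simply:

$ /opt/splunk/bin/splunk restart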
I am trying to trigger a Splunk search query from Java but am getting a connection timeout. Below is the stack trace:

ERROR 2024-03-05 15:17:06,830 [http-nio-9091-exec-2] traceID= app=NONE ver=0.0 geo=eu businessGeo= serviceGroupId=NONE env=local cl=com.nike.backstopper.handler.spring.SpringUnhandledExceptionHandler messageId= messageType= messageSourceId= : Caught unhandled exception: error_uid=e4efc159-ad38-4ab5-8bf1-4e277c49448b, dtrace_id=null, exception_class=java.lang.RuntimeException, returned_http_status_code=500, contributing_errors="GENERIC_SERVICE_ERROR", request_uri="/node/intgpltfm/messagetypes/v1/hello", request_method="GET", query_string="null", request_headers="authorization=Bearer  token value,postman-token=a37269d8-fb3a-498b-85f6-acb1611f84c0,host=localhost:9091,connection=keep-alive,accept-encoding=gzip, deflate, br,user-agent=PostmanRuntime/7.36.3,accept=*/*", unhandled_error="true"
java.net.ConnectException: Connection timed out: connect
    at java.net.PlainSocketImpl.connect0(Native Method) ~[?:?]
    at java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:101) ~[?:?]
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) ~[?:?]
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) ~[?:?]
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) ~[?:?]
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:?]
    at java.net.Socket.connect(Socket.java:608) ~[?:?]
    at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:302) ~[?:?]
    at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173) ~[?:?]
    at sun.net.NetworkClient.doConnect(NetworkClient.java:182) ~[?:?]
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:510) ~[?:?]
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:605) ~[?:?]
    at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:265) ~[?:?]
    at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:372) ~[?:?]
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:207) ~[?:?]
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187) ~[?:?]
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081) ~[?:?]
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:193) ~[?:?]
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:168) ~[?:?]
    at com.splunk.HttpService.send(HttpService.java:380) ~[splunk-1.4.0.0.jar:1.4.0]
    ... 80 more
Wrapped by: java.lang.RuntimeException: Connection timed out: connect
    at com.splunk.HttpService.send(HttpService.java:382) ~[splunk-1.4.0.0.jar:1.4.0]
    at com.splunk.Service.send(Service.java:1280) ~[splunk-1.4.0.0.jar:1.4.0]
    at com.splunk.HttpService.get(HttpService.java:163) ~[splunk-1.4.0.0.jar:1.4.0]
    at com.splunk.Service.export(Service.java:220) ~[splunk-1.4.0.0.jar:1.4.0]
    at com.splunk.Service.export(Service.java:235) ~[splunk-1.4.0.0.jar:1.4.0]
    at com.nike.na.node.intg.status.service.SplunkService.getFileList(SplunkService.java:87) ~[main/:?]
    at com.nike.na.node.intg.status.controller.MessageTypeController.getHelloWorldMessage(MessageTypeController.java:141) ~[main/:?]
    at com.nike.na.node.intg.status.controller.MessageTypeController$$FastClassBySpringCGLIB$$6afd90e.invoke(<generated>) ~[main/:?]
    at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) ~[spring-core-5.3.27.jar:5.3.27]
    at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:793) ~[spring-aop-5.3.27.jar:5.3.27]

I tried using Postman as well and received the same error there too. Your assistance will be greatly appreciated! Thanks.
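In case it helps narrow things down, here is a minimal connectivity check using the Splunk SDK for Java (the host and credentials are placeholders; 8089 is the default splunkd management port). A timeout at this point means the client cannot reach splunkd at all, so the search itself is not the problem:

import com.splunk.Service;
import com.splunk.ServiceArgs;

public class SplunkConnectTest {
    public static void main(String[] args) {
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setHost("splunk.example.com"); // placeholder host
        loginArgs.setPort(8089);                 // splunkd management port, not 8000 or 9997
        loginArgs.setUsername("admin");          // placeholder credentials
        loginArgs.setPassword("changeme");

        // Service.connect() opens the HTTPS connection; a ConnectException here
        // points to network routing, VPN, proxy, or firewall issues
        Service service = Service.connect(loginArgs);
        System.out.println("Connected to Splunk " + service.getInfo().getVersion());
    }
}

If this also times out from the machine where your Java app runs, check firewall rules, VPN, or whether an HTTP proxy is required to reach the Splunk host.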
please help me with this use case scenario
I am new to Splunk. How do we write a Splunk query for a support ticket in "In Progress" status to calculate the business hours elapsed on the ticket? We need to exclude the non-business hours of weekdays while the incident is in "In Progress" status, and also exclude holidays and weekends.
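There is no built-in business-hours function, so one common approach is to expand each ticket's in-progress window into hourly buckets and keep only the buckets that fall inside business hours. A rough sketch (the field names ticket_id, start_time and end_time, the timestamp format, and the 09:00-17:00 Monday-Friday window are all assumptions; holidays would still need a lookup of your own):

... your base search ...
| eval start=strptime(start_time, "%Y-%m-%d %H:%M:%S"), end=strptime(end_time, "%Y-%m-%d %H:%M:%S")
| eval hour_buckets=mvrange(relative_time(start, "@h"), end, 3600)
| mvexpand hour_buckets
| eval wday=strftime(hour_buckets, "%a"), hr=tonumber(strftime(hour_buckets, "%H"))
| where wday!="Sat" AND wday!="Sun" AND hr>=9 AND hr<17
| stats count AS business_hours_elapsed by ticket_id

This counts whole hours only (an approximation), and mvexpand can get memory-heavy for very long-lived tickets, so treat it as a starting point rather than a finished query.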
Hi everyone. I have the following issue using Splunk Enterprise (v. 9.2.0). I developed a script to send a CSV dataset to Splunk using a data input (I know it's possible to upload the CSV directly, but I have specific requirements). Then, I defined a real-time alert with the following settings: that is, "trigger an alert every time the provided query returns at least 1 result during a minute" (in the actual situation the threshold will be 600 and not 1, but this is a test). When I enable the alert and start sending data, I see this window updating in real time, but no alert is triggered. Why?
The below-mentioned lines are coming in as a single event instead of separate events. We want to get them split, i.e. each event starts with the IP field and ends with the Email field, after which the next event should begin.

IP:aa.bbb.ccc.ddd##Browser:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0##LoginSuccess Wire At:04-03-24 15:10:32##CookieFilePath:/xxx/yyy/abc.com/xyz/abc/forms/submitform/live/12345/98765_3598/clear.txt##ABC:12344564##Sessionid:xyz-a1-ddd_1##Form:xyz##Type:Live##LoginSuccess:Yes##SessionUserId:123##Email:xyz@google.com
IP:aa.bbb.ccc.ddd##Browser:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0##LoginSuccess Wire At:04-03-24 17:12:32##CookieFilePath:/xxx/yyy/abc.com/xyz/abc/forms/submitform/live/12345/1234_9564/clear.txt##ABC:12344564##Sessionid:xyz-a1-ddd_1##Form:xyz##Type:Live##LoginSuccess:Yes##SessionUserId:123##Email:xyz@google.com
IP:aa.bbb.ccc.ddd##Browser:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0##LoginSuccess Wire At:04-03-24 18:10:32##CookieFilePath:/xxx/yyy/abc.com/xyz/abc/forms/submitform/live/12345/9821_365/clear.txt##ABC:12344564##Sessionid:xyz-a1-ddd_1##Form:xyz##Type:Live##LoginSuccess:Yes##SessionUserId:123##Email:xyz@google.com
IP:aa.bbb.ccc.ddd##Browser:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0##LoginSuccess Wire At:04-03-24 20:10:32##CookieFilePath:/xxx/yyy/abc.com/xyz/abc/forms/submitform/live/12345/222_123/clear.txt##ABC:12344564##Sessionid:xyz-a1-ddd_1##Form:xyz##Type:Live##LoginSuccess:Yes##SessionUserId:123##Email:xyz@google.com

So kindly let me know how we can get them split into separate events.
can you let me know how
Hello All, I just wanted to know whether there is any way to identify which CIM data model a CIM-compliant add-on from Splunkbase normalizes its data to. One way I know of is to check tags.conf and eventtypes.conf, where the data model is referenced in the form of a tag, but if tags.conf and eventtypes.conf are not there, how do we identify which data model is being used by the add-on? If anybody has faced the same issue, or knows how to deal with it, please let me know.
Hi @Nawab, disable the Open in Search feature. Ciao. Giuseppe
Yeah, he can still click on the search icon and modify the search in the Search app.
I have a lookup which has fields like account_name, account_owner, environment, etc. This lookup has more than 1000 rows. I created a macro containing the search query below:

search [| inputlookup Account_Owners.csv
| rename "Account ID" as aws_account_id
| search Environment IN (PROD, UAT, )
| table account_id]

After that, whenever I call this macro with an index, it doesn't fetch all the logs, only those for a few accounts. But when I pass the lookup query directly into the search with the same index, it populates every log.
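One thing worth double-checking (a guess based only on the snippet above; the actual field names in your events are unknown): the subsearch renames "Account ID" to aws_account_id but then returns account_id from the table command, so the field name handed back to the outer search may not match any field that exists in the events. A consistent version of the sketch would be:

search [| inputlookup Account_Owners.csv
| rename "Account ID" as aws_account_id
| search Environment IN (PROD, UAT)
| table aws_account_id]

The field returned by table in the subsearch becomes the field=value filter applied to the outer search, so it has to match the field name that actually exists in your indexed data.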