All Posts

I can confirm that the problem is fixed in version 9.2.1! I upgraded to version 9.2.1 and those indexers work perfectly without any additional configuration. Thanks!
Thanks. It is always best to give an accurate representation of your data, as it saves everyone's time. Try like this:
| makeresults
| eval _raw="2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-012 - user-id=test01 Target language count 1
2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-123 - user-id=test01 Target language count 1
29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-123 - user-id=test01 API call is True for MyEngine
29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-234 - user-id=test01 API call is True for MyEngine
{\"timestamp\":\"2024-03-30T11:28:58.438Z\",\"logger_name\":\"MessageHandler\",\"thread_name\":\"threadPoolTaskExecutor-22\",\"level\":\"INFO\",\"serviceArchPath\":\"worker\",\"process\":\"NA\",\"message\":\"Marked request as succeed. {\\\"status\\\":\\\"PREDICT_SUCCESS\\\",\\\"message\\\":null}, critical Path: {\\\"requestStartTime\\\":1711797346337,\\\"operationsInCriticalPath\\\":{\\\"PREDICT:Feature:01\\\":{\\\"queue_overhead_ms\\\":59,\\\"sdk_overhead_ms\\\":25,\\\"process_time_ms\\\":432265,\\\"total_time_ms\\\":432349},\\\"PREDICT:Feature:02\\\":{\\\"queue_overhead_ms\\\":68,\\\"sdk_overhead_ms\\\":17,\\\"process_time_ms\\\":358611,\\\"total_time_ms\\\":358697},\\\"PLAT_CORE:Orchestrator\\\":{\\\"queue_overhead_ms\\\":142,\\\"process_time_ms\\\":158,\\\"total_time_ms\\\":300,\\\"hbase_overhead_in_ms\\\":136},\\\"PLAT_CORE:inference-core\\\":{\\\"total_time_ms\\\":8,\\\"hbase_overhead_in_ms\\\":5},\\\"PREDICT:Feature:03\\\":{\\\"queue_overhead_ms\\\":78,\\\"sdk_overhead_ms\\\":5,\\\"process_time_ms\\\":663,\\\"total_time_ms\\\":747}},\\\"currentOperationTimingInfo\\\":{},\\\"total_time_ms\\\":792101}\",\"x-request-id\":\"testabc-123\",\"x-service-id\":\"testID\",\"x-api-key\":\"test-client\",\"x-client-id\":\"test-client\",\"invocation_id\":\"test1\",\"x-user-id\":\"test@abc.com\",\"x-access-protected-e\":\"true\",\"trace_id\":\"testabc-0000\",\"trace_flags\":\"00\",\"span_id\":\"b1\"}"
| multikv noheader=t
| table _raw
| rex "Marked request as (?<finalStatus>\w+).+\"x-request-id\":\"(?<reqID>[^\"]+)\""
| rex field=_raw "request_id=(?<reqID>.+?) - .+(Target language count|API call is True for MyEngine)"
| rex field=_raw "Target language count (?<num_target>\d+)"
| rex field=_raw "API call is (?<callTrue>True) for MyEngine"
| stats first(num_target) as num_target first(callTrue) as callTrue first(finalStatus) as finalStatus by reqID
| where callTrue=="True" AND isnotnull(num_target)
Hi all, I'm trying to understand how rotating the certificates used for SSO works in a search head cluster. We have a search head cluster where SSO is already working. For the initial setup, I understand we can download the SPmetadata.xml file from the Splunk SAML settings page. However, during rotation, how do we generate this file when the certificate we want to rotate is the server-side certificate that is already in use? If we just download SPmetadata.xml to create the request for the IdP, it will contain the same certificate we are currently using. If we rotate the certificate on our side first so we can download SPmetadata.xml and create the request for the IdP, then obviously this will end up in errors in the meantime, because the IdP won't recognize the new server-side certificate.
Thanks! You are correct, and on dummy data it is working perfectly fine. Let me share the exact event structure, with dummy data, from the platform index (platform-va6) that has the final status; a snap is attached for reference. Upon clicking on "show as _raw text":

_raw text
===========
{"timestamp":"2024-03-30T11:28:58.438Z","logger_name":"MessageHandler","thread_name":"threadPoolTaskExecutor-22","level":"INFO","serviceArchPath":"worker","process":"NA","message":"Marked request as succeed. {\"status\":\"PREDICT_SUCCESS\",\"message\":null}, critical Path: {\"requestStartTime\":1711797346337,\"operationsInCriticalPath\":{\"PREDICT:Feature:01\":{\"queue_overhead_ms\":59,\"sdk_overhead_ms\":25,\"process_time_ms\":432265,\"total_time_ms\":432349},\"PREDICT:Feature:02\":{\"queue_overhead_ms\":68,\"sdk_overhead_ms\":17,\"process_time_ms\":358611,\"total_time_ms\":358697},\"PLAT_CORE:Orchestrator\":{\"queue_overhead_ms\":142,\"process_time_ms\":158,\"total_time_ms\":300,\"hbase_overhead_in_ms\":136},\"PLAT_CORE:inference-core\":{\"total_time_ms\":8,\"hbase_overhead_in_ms\":5},\"PREDICT:Feature:03\":{\"queue_overhead_ms\":78,\"sdk_overhead_ms\":5,\"process_time_ms\":663,\"total_time_ms\":747}},\"currentOperationTimingInfo\":{},\"total_time_ms\":792101}","x-request-id":"testabc-123","x-service-id":"testID","x-api-key":"test-client","x-client-id":"test-client","invocation_id":"test1","x-user-id":"test@abc.com","x-access-protected-e":"true","trace_id":"testabc-0000","trace_flags":"00","span_id":"b1"}
@PickleRick @marnall After further investigation I found that TCP port 8088 was being used by another app. I removed the config from there and now everything is working fine. Issue and resolved screenshots are attached for reference. Thanks to both of you for your support and suggestions.
Here is a runanywhere example, using some mock-up data similar to what you posted. It shows the search working. The issue may be that my dummy data is not quite representative of your data. Please examine it to see if any corrections can be made to the dummy data; e.g. I made the events single-line, whereas your example seemed to show them as multi-line, but I wasn't sure if that was important.
| makeresults
| eval _raw="2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-012 - user-id=test01 Target language count 1
2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-123 - user-id=test01 Target language count 1
29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-123 - user-id=test01 API call is True for MyEngine
29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-234 - user-id=test01 API call is True for MyEngine
29/03/2024 18:01:20.556 message: Marked request as succeed. {\"status\":\"PREDICT_SUCCESS\"} x-request-id: testabc-123"
| multikv noheader=t
| table _raw
``` The lines above set up some dummy data ```
| rex "Marked request as (?<finalStatus>\w+).+ x-request-id: (?<reqID>.+)"
| rex field=_raw "request_id=(?<reqID>.+?) - .+(Target language count|API call is True for MyEngine)"
| rex field=_raw "Target language count (?<num_target>\d+)"
| rex field=_raw "API call is (?<callTrue>True) for MyEngine"
| stats first(num_target) as num_target first(callTrue) as callTrue first(finalStatus) as finalStatus by reqID
| where callTrue=="True" AND isnotnull(num_target)
Another workaround is to collect the lookup data into an index before overwriting it with another "release". Then you can run a normal search against your indexed data.
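As a rough sketch of that approach (yourlookup and lookup_history are placeholder names for your lookup and the index you collect it into), something along these lines could run on a schedule before each "release":
| inputlookup yourlookup
| eval _time=now()
| collect index=lookup_history
After that, the historical copies can be searched like any other indexed data, e.g. index=lookup_history.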
Assuming you have a lookup containing three columns (index, host, sourceid), so that you can have multiple index/host pairs matching a single sourceid, and you want to find situations where none of the index/host pairs for a given sourceid report indexed events, you can do it like this (a combined sketch follows after the steps). Count the events you have (preferably with tstats if you can):
| tstats count where <your conditions> by index host
Now append your lookup table:
| inputlookup append=t yourlookup
| fillnull count value=0
Then check the overall count:
| stats sum(count) as count by index host sourceid
This is not much different from your "single source check", but since you want it checked against a "multi-sourced" id, do:
| eventstats sum(count) as combined_count by sourceid
This gives you an additional field containing the combined count of events across all index/host pairs for a given sourceid. The ones you're interested in are those which didn't have any events in any of those index/host pairs:
| where combined_count=0
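Putting those steps together, a rough end-to-end sketch might look like this (yourlookup and <your conditions> are placeholders; the extra | lookup step is one way to attach sourceid to the counted rows so the final stats can group on it - adjust it to however your lookup is actually defined):
| tstats count where <your conditions> by index host
| lookup yourlookup index host OUTPUT sourceid
| inputlookup append=t yourlookup
| fillnull count value=0
| stats sum(count) as count by index host sourceid
| eventstats sum(count) as combined_count by sourceid
| where combined_count=0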
The high-level idea is sound - use the common contact point to get data from an isolated network segment. The issue here is the low-level design. We don't know what solution you're talking about, how the data is being processed there, or how/where the logs are stored and forwarded. There are different possible approaches depending on how it all works (syslog? REST API? something else?)
In a small environment (especially a lab one) you can sometimes combine several roles into one server, and a HF as such is nothing more than a Splunk Enterprise instance with forwarding enabled (actually, you could argue that any component that is not a UF and does not do local indexing is a HF). So this setup (a DS also doing HF work) should work. In this setup you should have:
1) On your indexer(s) - inputs.conf creating an input for S2S from your HF (that's kinda obvious)
2) On your HF/DS - inputs.conf, outputs.conf (again - obvious stuff), serverclass.conf
3) On your UF/client HF - deploymentclient.conf pointing to your HF/DS instance (a minimal sketch of these stanzas follows below)
You also need to take into account that some things changed in 9.2. So if you upgraded to 9.2, see https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers
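A minimal sketch of those stanzas, assuming placeholder hostnames idx01 (indexer) and ds-hf01 (the combined DS/HF), default ports, and a deployed app named my_outputs_app - adjust all of these to your environment:

On the indexer(s), inputs.conf:
[splunktcp://9997]
disabled = 0

On the HF/DS, outputs.conf:
[tcpout]
defaultGroup = primary_indexers
[tcpout:primary_indexers]
server = idx01:9997

On the HF/DS, serverclass.conf:
[serverClass:all_forwarders]
whitelist.0 = *
[serverClass:all_forwarders:app:my_outputs_app]
restartSplunkd = true

On the UF/client HF, deploymentclient.conf:
[deployment-client]
[target-broker:deploymentServer]
targetUri = ds-hf01:8089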
Yep. While the local processing on the UF is supposed to work _somehow_, it's indeed not very well documented and not recommended. If you want only some sources masked, you could use source-based or host-based stanzas in props.conf on your indexer(s) to selectively apply your SEDCMD or transform only to a specific part of your data.
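For example (just a sketch - the source path, host pattern, sed expression, and transform name are placeholders for your own values), in props.conf on the indexer(s):
[source::/var/log/myapp/*.log]
SEDCMD-mask_ids = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g
or, host-based:
[host::appserver*]
TRANSFORMS-mask = my_masking_transform
(with my_masking_transform defined in transforms.conf).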
The usual debugging steps apply:
1) Check if the receiving side is listening on the port (use netstat to list open ports and verify that 8088 is among them).
2) Check the network connectivity from the client.
3) Verify firewall rules.
4) If needed, run tcpdump/wireshark on the server and see if any traffic from the client is reaching the server at all.
Example commands for steps 1 and 4 are sketched below. Once you can connect to your HEC service port, you can start debugging the token settings.
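For instance, on a Linux receiver (commands may differ on your OS), step 1 could be:
netstat -tlnp | grep 8088
or
ss -tlnp | grep 8088
and step 4:
tcpdump -i any port 8088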
Could you outline the exact steps you took? There are a lot of IDs (Tenant ID, Client ID, Secret ID, etc.), so make sure you enter the correct ones in the app configuration (Tenant ID, Application/Client ID, and the Secret itself).
Hi @AlirezaGhanavat, first of all, which user are you using to install the UF? Does it have the rights to install an app? Do you have an antivirus? Anyway, in these cases I always open a case with Splunk Support. Ciao. Giuseppe
Hi @taijusoup64, always use quotes in the eval condition:
index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| fields orig_bytes, resp_bytes
| eval terabytes=((if(id.resp_h="192.168.0.1",resp_bytes,0))+(if(id.orig_h="192.168.0.1",orig_bytes,0)))/1024/1024/1024/1024
| stats sum(terabytes)
Ciao. Giuseppe
Hi @Rahul-Sri, my solution is only for a table, because it transforms a number into a string. If you have to display the result in a graph, you can divide by 1000000 and indicate in the subtitle that the numbers are in millions, or use a logarithmic scale in the graph. Ciao. Giuseppe
Hello @splunkreal, AFAIK yes - both ways will update the capabilities for the respective roles, as mentioned here - https://docs.splunk.com/Documentation/ES/7.3.1/Install/ConfigureUsersRoles#Add_capabilities_to_a_role Please accept the solution and hit Karma if this helps!
@marnall I have also opened inbound port 8088, so I think a firewall-related issue should not be the concern now.
Depending on how your server is configured, it may reject plain HTTP connections. Are you able to connect to the collector health endpoint on 127.0.0.1 by connecting to the server via telnet and sending the request to localhost?
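For example, from the server itself (assuming the default HEC port 8088 and plain HTTP; use https instead if SSL is enabled on HEC):
curl http://127.0.0.1:8088/services/collector/health
A healthy collector typically responds with something like {"text":"HEC is healthy","code":17}.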
Hello @ezmo1982, just checking in to see whether the issue was resolved or if you have any further questions?