All Posts

@PickleRick @marnall After further investigation I found that TCP port 8088 was being used by another app. I removed the config from there and now everything is working fine. Issue screenshot: Resolved screenshot: Thanks to both of you for your support and suggestions.
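For anyone hitting the same symptom, one way to find which app claims a port is Splunk's btool, which prints every effective setting together with the file it comes from. A minimal sketch (the grep pattern is just the port in question):

# List all effective inputs.conf settings with their source files, then filter for the port
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep 8088

The file paths in the output tell you which app directory defines the conflicting stanza.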
Here is a runanywhere example, using some mock-up data similar to what you posted. It shows the search working. The issue may be that my dummy data is not quite representative of your data. Please examine it to see if any corrections can be made to the dummy data, e.g. I made the events single-line whereas your example seemed to show them as multi-line, but I wasn't sure if that was important.

| makeresults
| eval _raw="2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-012 - user-id=test01 Target language count 1
2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-123 - user-id=test01 Target language count 1
29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-123 - user-id=test01 API call is True for MyEngine
29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-234 - user-id=test01 API call is True for MyEngine
29/03/2024 18:01:20.556 message: Marked request as succeed. {\"status\":\"PREDICT_SUCCESS\"} x-request-id: testabc-123"
| multikv noheader=t
| table _raw
``` The lines above set up some dummy data ```
| rex "Marked request as (?<finalStatus>\w+).+ x-request-id: (?<reqID>.+)"
| rex field=_raw "request_id=(?<reqID>.+?) - .+(Target language count|API call is True for MyEngine)"
| rex field=_raw "Target language count (?<num_target>\d+)"
| rex field=_raw "API call is (?<callTrue>True) for MyEngine"
| stats first(num_target) as num_target first(callTrue) as callTrue first(finalStatus) as finalStatus by reqID
| where callTrue=="True" AND isnotnull(num_target)
Another workaround is to collect the lookup data to an index before overwriting it with another "release". Then you can run a normal search against your indexed data.
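A minimal sketch of that collection step, run on a schedule before each lookup refresh (the lookup, index, and sourcetype names here are placeholders, not from the thread):

| inputlookup yourlookup
| collect index=lookup_history sourcetype=lookup_snapshot

Each run appends a timestamped snapshot of the lookup's rows to the index, so older "releases" stay searchable.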
Assuming you have a lookup containing three columns (index, host, sourceid) so that you can have multiple index/host pairs matching a single sourceid, and you want to find situations where none of the index/host pairs for a given sourceid report indexed events, you can do it like this (a consolidated sketch of the whole pipeline follows below).

Count the events you have (preferably with tstats if you can):

| tstats count where <your conditions> by index host

Now append your lookup table:

| inputlookup append=t yourlookup
| fillnull count value=0

Then check the overall count:

| stats sum(count) as count by index host sourceid

This is not much different from your "single source check", but since you want it checked against a "multisourced" id, do:

| eventstats sum(count) as combined_count by sourceid

This gives you an additional field containing the combined count of events across all index/host pairs for a given sourceid. The ones you're interested in are those which didn't have any events in any of those index/host pairs:

| where combined_count=0
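Put together, a minimal end-to-end sketch might look like this (the lookup name yourlookup and the index filter are placeholders for whatever you actually use):

| tstats count where index=* by index host
| inputlookup append=t yourlookup
| fillnull count value=0
| stats sum(count) as count by index host sourceid
| eventstats sum(count) as combined_count by sourceid
| where combined_count=0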
The high-level idea is sound - use the common contact point to get data from some isolated network segment. The issue here is the low-level design. We don't know what solution you're talking about, how the data is processed there, or how/where the logs are stored and forwarded. There are different possible approaches depending on how it all works (syslog? REST API? something else?).
In a small environment (especially a lab one) you can sometimes combine several roles into one server, and an HF as such is nothing more than a Splunk Enterprise instance with forwarding enabled (actually you could argue that any component that is not a UF and not doing local indexing is an HF). So this setup (a DS also doing HF work) should work. In this setup you should have:
1) On your indexer(s) - inputs.conf creating an input for S2S from your HF (that's kinda obvious)
2) On your HF/DS - inputs.conf, outputs.conf (again - obvious stuff), and serverclass.conf
3) On your UF/clients - deploymentclient.conf pointing to your HF/DS instance
A rough config sketch for these files follows below. You also need to take into account that some things changed in 9.2. So if you upgraded to 9.2, see https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers
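As a rough illustration, assuming an indexer at idx1.example.com and the DS/HF at ds.example.com (the hostnames, app name, and S2S port 9997 are assumptions, not from the thread), the relevant stanzas might look like:

# inputs.conf on the indexer(s) - listen for S2S traffic
[splunktcp://9997]
disabled = 0

# outputs.conf on the HF/DS - forward to the indexer(s)
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = idx1.example.com:9997

# serverclass.conf on the HF/DS - what to deploy to which clients
[serverClass:all_ufs]
whitelist.0 = *

[serverClass:all_ufs:app:my_outputs_app]
restartSplunkd = true

# deploymentclient.conf on the UF - phone home to the DS
[target-broker:deploymentServer]
targetUri = ds.example.com:8089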
Yep. While the local processing on the UF is supposed to work _somehow_, it's indeed not very well documented and not recommended. If you want only some sources masked, you could use source-based or host-based stanzas in props.conf on your indexer(s) to selectively apply your SEDCMD or transform only to a specific part of your data.
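For example, a source-scoped SEDCMD stanza might look like the sketch below (the log path and the SSN-style masking regex are hypothetical; adjust both to your data):

# props.conf on the indexer(s) - mask only events from this source
[source::/var/log/myapp/*.log]
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g

Events from any other source pass through untouched, since the stanza only matches that path.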
The usual debugging steps apply:
1) Check if the receiving side is listening on the port (use netstat to list open ports and verify that 8088 is among them).
2) Check the network connectivity from the client.
3) Verify firewall rules.
4) If needed, run tcpdump/wireshark on the server and see if any traffic from the client is reaching the server at all.
Once you can connect to your HEC service port, you can start debugging the token settings. Example commands for steps 1, 2, and 4 are sketched below.
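Concretely, the checks might look like this (the server hostname is a placeholder; /services/collector/health is HEC's unauthenticated health endpoint):

# 1) On the server: is anything listening on 8088?
netstat -tlnp | grep 8088    # or: ss -tlnp | grep 8088

# 2) From the client: can we reach the HEC health endpoint?
curl -k https://splunk.example.com:8088/services/collector/health

# 4) On the server: is the client's traffic arriving at all?
tcpdump -i any port 8088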
Could you outline the exact steps you took? There are a lot of IDs involved (Tenant ID, Client ID, Secret ID, etc.), so make sure you enter the correct ones in the app configuration: the Tenant ID, the Application/Client ID, and the Secret value itself.
Hi @AlirezaGhanavat, first of all, which user are you using to install the UF? Does it have the permissions to install an app? Do you have an antivirus running? Anyway, in cases like these I always open a case with Splunk Support. Ciao. Giuseppe
Hi @taijusoup64, always use quotes in the eval condition:

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| fields orig_bytes, resp_bytes
| eval terabytes=(if('id.resp_h'="192.168.0.1",resp_bytes,0) + if('id.orig_h'="192.168.0.1",orig_bytes,0))/1024/1024/1024/1024
| stats sum(terabytes)

Ciao. Giuseppe
Hi @Rahul-Sri, my solution is only for a table, because it transforms the number into a string. If you have to display the result in a graph, you can divide by 1000000 and indicate in the subtitle that the numbers are in millions, or use a logarithmic scale in the graph. Ciao. Giuseppe
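A minimal sketch of the divide-by-a-million approach (the field names here are illustrative, not from the thread):

| stats sum(bytes) as total_bytes
| eval total_millions=round(total_bytes/1000000, 2)

The total_millions field stays numeric, so it charts normally; only the subtitle needs to say the values are in millions.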
Hello @splunkreal, AFAIK yes - both ways will add the capabilities to the respective roles, as described here: https://docs.splunk.com/Documentation/ES/7.3.1/Install/ConfigureUsersRoles#Add_capabilities_to_a_role Please accept the solution and hit Karma, if this helps!
@marnall I have also opened inbound port 8088, so I think a firewall-related issue should no longer be the concern.
Depending on how your server is configured, it may reject plain HTTP connections. Are you able to reach the collector health endpoint on 127.0.0.1 by connecting to the server via telnet and sending the request to localhost?
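For reference, the raw telnet check might look like this (assuming HEC listens on 8088; press Enter twice after the Host header to end the request):

telnet 127.0.0.1 8088
GET /services/collector/health HTTP/1.1
Host: localhost

If HEC enforces SSL, the plain-text request will be rejected, which itself tells you the port is up but expects HTTPS.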
Hello @ezmo1982, just checking in to see if the issue was resolved or if you have any further questions?
Hello, just checking in to see if the issue was resolved or if you have any further questions?
Hello @short_cat, I don't think it's possible. I tried with makemv as well, something like:

| makeresults
| eval message = "This is line 1.\nThis is line 2.\nThis is line 3."
| makemv message delim="\n"

But it's not sending the message as expected and just considers the first line, as in the screenshot below. I would suggest checking with the project contributors on GitHub: https://github.com/splunk/slack-alerts
Hello @viktoriiants, how about sorting by 'Session count' first, and only then by date descending?