All Posts

Another workaround is to collect the lookup data to an index before overwriting it with another "release". Then you can do a normal search against your indexed data.
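As a rough sketch of that workaround (the lookup name yourlookup.csv and the summary index lookup_history are placeholders you would substitute), a scheduled search like this could snapshot the lookup contents before each release overwrites it:

| inputlookup yourlookup.csv
| eval _time=now()
| collect index=lookup_history

Afterwards you can search index=lookup_history like any other indexed data and compare snapshots over time.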
Assuming you have a lookup containing three columns (index, host, sourceid), so that you can have multiple index/host pairs matching a single sourceid, and you want to find situations where none of the index/host pairs for a given sourceid report indexed events, you can do it like this:

Count the events you have (preferably with tstats if you can):

| tstats count where <your conditions> by index host

Now append your lookup table:

| inputlookup append=t yourlookup
| fillnull count value=0

Then check the overall count:

| stats sum(count) as count by index host sourceid

This is not much different from your "single source check", but since you want it checked against a "multisourced" id, do:

| eventstats sum(count) as combined_count by sourceid

This gives you an additional field containing a combined count of events across all index/host pairs for a given sourceid. The ones you're interested in are those which didn't have any events in any of those index/host pairs:

| where combined_count=0
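Putting those pieces together, the whole thing might look like the sketch below. It assumes the lookup file is named yourlookup.csv and can be referenced by filename; note that the tstats rows themselves don't carry a sourceid, so this sketch also enriches them from the same lookup before the stats step (otherwise they would be dropped by the split-by clause). Replace index=* with your real conditions.

| tstats count where index=* by index host
| lookup yourlookup.csv index host OUTPUT sourceid
| inputlookup append=t yourlookup.csv
| fillnull count value=0
| stats sum(count) as count by index host sourceid
| eventstats sum(count) as combined_count by sourceid
| where combined_count=0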
The high-level idea is sound - use the common contact point to get data from an isolated network segment. The issue here is the low-level design. We don't know what solution you're talking about, how data is being processed there, or how/where the logs are stored and forwarded. There are different possible approaches depending on how it all works (syslog? REST API? whatever?).
In a small environment (especially a lab one) you can sometimes combine several roles into one server, and an HF as such is nothing more than a Splunk Enterprise instance with forwarding enabled (actually you could argue that any component that is not a UF and does not do local indexing is an HF). So this setup (a DS also doing HF work) should work. In this setup you should have:

1) On your indexer(s) - inputs.conf creating an input for s2s from your HF (that's kinda obvious)
2) On your HF/DS - inputs.conf, outputs.conf (again - obvious stuff), serverclass.conf
3) On your UF/client HF - deploymentclient.conf pointing to your HF/DS instance

You also need to take into account that some things changed in 9.2. So if you upgraded to 9.2, see https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers
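As a minimal sketch of those files (the host names, ports, server class, and app name below are placeholders; 9997 and 8089 are just the conventional s2s and management ports):

# inputs.conf on the indexer(s) - receive s2s traffic from the HF
[splunktcp://9997]
disabled = 0

# outputs.conf on the HF/DS - forward everything to the indexer(s)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.local:9997

# serverclass.conf on the HF/DS - which apps go to which clients
[serverClass:all_forwarders]
whitelist.0 = *

[serverClass:all_forwarders:app:my_outputs_app]
restartSplunkd = true

# deploymentclient.conf on the UF / client HF - phone home to the DS
[target-broker:deploymentServer]
targetUri = ds-hf.example.local:8089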
Yep. While local processing on the UF is supposed to work _somehow_, it's indeed not very well documented and not recommended. If you want only some sources masked, you could use source-based or host-based stanzas in props.conf on your indexer(s) to selectively apply your SEDCMD or transform only to a specific part of your data.
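For example, a sketch of what that could look like in props.conf on the indexer(s) - the source path and masking pattern here are made up purely for illustration:

# props.conf - apply masking only to this one source
[source::/var/log/myapp/*.log]
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g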
The usual debugging steps apply:

1) Check if the receiving side is listening on the port (use netstat to list open ports and verify if 8088 is among them).
2) Check the network connectivity from the client.
3) Verify firewall rules.
4) If needed, run tcpdump/wireshark on the server and see if any traffic from the client is reaching the server at all.

When you can connect to your HEC service port, you can start debugging the token settings.
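For steps 1 and 2, something along these lines usually does the job (Linux examples, assuming HEC runs on the default port 8088; the host name is a placeholder):

# on the receiving side: is anything listening on 8088?
sudo netstat -tlnp | grep 8088

# from the client: hit the HEC health endpoint
# (-k skips certificate validation; use http:// instead if SSL is disabled on HEC)
curl -k https://your-splunk-host:8088/services/collector/health

A working collector should answer with something like {"text":"HEC is healthy","code":17}.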
Could you outline the exact steps you took? There are a lot of IDs involved (Tenant ID, Client ID, Secret ID, etc.), so make sure you enter the correct ones in the app configuration: the Tenant ID, the Application/Client ID, and the Secret value itself.
Hi @AlirezaGhanavat, first of all: which user are you using to install the UF? Does it have the permissions to install an app? Do you have an antivirus running? Anyway, in cases like these I always open a case with Splunk Support. Ciao. Giuseppe
Hi @taijusoup64, always use quotes in the eval condition:

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| fields orig_bytes, resp_bytes
| eval terabytes=((if(id.resp_h="192.168.0.1",resp_bytes,0))+(if(id.orig_h="192.168.0.1",orig_bytes,0)))/1024/1024/1024/1024
| stats sum(terabytes)

Ciao. Giuseppe
Hi @Rahul-Sri, my solution is only for a table because it transforms a number into a string. If you have to display the result in a graph, you can divide by 1000000 and indicate in the subtitle that the numbers are in millions, or use a logarithmic scale in the graph. Ciao. Giuseppe
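For instance, a tiny sketch of that division (the field names here are made up):

| makeresults
| eval revenue=123456789
| eval revenue_millions=round(revenue/1000000, 2)

That yields 123.46, which you can chart as-is with a subtitle such as "values in millions".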
Hello @splunkreal, AFAIK yes - both ways will add the capabilities to the respective roles, as described here - https://docs.splunk.com/Documentation/ES/7.3.1/Install/ConfigureUsersRoles#Add_capabilities_to_a_role Please accept the solution and hit Karma if this helps!
@marnall I have also opened inbound port 8088, so I don't think a firewall-related issue is the concern now.
Depending on how your server is configured, it may reject plain HTTP connections. Are you able to reach the collector health endpoint on 127.0.0.1 by connecting to the server via telnet and sending the request to localhost?
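If HEC's SSL is disabled, a plain-telnet test could look like the sketch below (the /services/collector/health endpoint does not require a token). If enableSSL is on, telnet can't speak TLS, so use curl -k or openssl s_client instead.

telnet 127.0.0.1 8088
GET /services/collector/health HTTP/1.1
Host: 127.0.0.1

(press Enter twice after the Host header to send the request)

A healthy collector should respond with HTTP 200 and a body along the lines of {"text":"HEC is healthy","code":17}.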
Hello @ezmo1982, just checking in - was the issue resolved, or do you have any further questions?
Hello, just checking in - was the issue resolved, or do you have any further questions?
Hello, just checking in - was the issue resolved, or do you have any further questions?
Hello @short_cat, I don't think it's possible. I tried with makemv as well, something like:

| makeresults
| eval message = "This is line 1.\nThis is line 2.\nThis is line 3."
| makemv message delim="\n"

But it's not sending the message as expected - only the first line comes through. I would suggest checking with the project contributors over GitHub - https://github.com/splunk/slack-alerts
Hello @viktoriiants, how about sorting by 'Session count' first, and then by date descending?
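Assuming the field names are literally "Session count" and "date" (placeholders here), one way to sketch that - renaming first because the field name contains a space - would be:

| rename "Session count" AS session_count
| sort - session_count, - date
| rename session_count AS "Session count"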
@yuanliu Thanks again for your detailed explanation. Apologies, I should have asked about id_num as a follow-up question rather than tying it to this main question. Instead of using filldown to populate id_num, I extracted id_num and included it as part of the fields in every payload uploaded to Splunk. I updated to the following query and it worked:

index="demo1" source="demo2"
    [inputlookup sample.csv
    | fields FailureMsg
    | rename FailureMsg AS search
    | format ]
| rex field=_raw "test_field_name=(?P<test_field_name>.+)]:"
| search test_field_name="test_field_name_1"
| table _raw id_num

Thanks again for your detailed analysis and guidance in helping solve this.
Let's not conflate different matters. The original problem has nothing to do with id_num, filldown, or any other subject. No other data characteristics were described. The only information about the data is the filter (fail_msg1 OR fail_msg2). Let's focus on this and raise a separate question about id_num.

The big question about the search is: does this pick the correct events?

index="demo1" source="demo2"
    [inputlookup sample.csv
    | fields FailureMsg
    | rename FailureMsg AS search
    | format]

To help you answer this, edit your sample.csv to ONLY include fail_msg1 and fail_msg2. Use this lookup to run the search in a fixed interval, e.g., earliest=-1d@d latest=-0d@d. Then, run the other search in the same fixed interval:

index="demo1" source="demo2" ("fail_msg1" OR "fail_msg2")

Do you get the same events? In fact, run a third test in the same interval (as long as you run all searches within the same day):

index="demo1" source="demo2"
    [makeresults format=csv data="FailureMsg
fail_msg1
fail_msg2"
    | rename FailureMsg AS search
    | format]

If you get the same events from all three and your id_num is blank, you should look at the events themselves to find out why your regex won't work. In other words, the inputlookup subsearch has no way to influence any operation after events are returned.

We can discuss further if ("fail_msg1" OR "fail_msg2") gives drastically different events from the other two. In that case, you will need to show raw events returned from each and explain what the differences are between the two groups of events. (Anonymize as necessary.)

Here is a look at why I am suggesting these tests. Just take the kernel of those two subsearches without the index search:

| inputlookup sample.csv
| fields FailureMsg
| rename FailureMsg AS search
| format

and

| makeresults format=csv data="FailureMsg
fail_msg1
fail_msg2"
| rename FailureMsg AS search
| format

Both will give you

search ( ( fail_msg1 ) OR ( fail_msg2 ) )

This is why I am confident that the subsearches are identical to ("fail_msg1" OR "fail_msg2").