All Posts



Hi - I get the same problem running splencore.sh, after exporting the path and setting permissions on the cert. The server is CentOS 8 Stream. Can this be related to an error in the cert, or a missing firewall opening from my Splunk HF?

[root@hostname bin]# ./splencore.sh test
Traceback (most recent call last):
  File "./estreamer/preflight.py", line 33, in <module>
    import estreamer.crossprocesslogging
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/__init__.py", line 31, in <module>
    from estreamer.diagnostics import Diagnostics
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/diagnostics.py", line 43, in <module>
    import estreamer.pipeline
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 29, in <module>
    from estreamer.metadata import View
ModuleNotFoundError: No module named 'estreamer.metadata'
Hi everyone, I have an old dashboard that I want to convert to the Dashboard Studio format. However, it seems that Dashboard Studio does not support the use of prefix, suffix, and delimiter in the same way. Is there any way to achieve the same effect using a search query?
One thing I have noticed: when I try to resync one of these 3 pending jobs, the drop-down menu for choosing where to resync the bucket only offers indexers from the bucket's origin site, plus a single indexer from the other site. So I can't force replication to another indexer in the other site... The "View Bucket Details" link in the pending task's details gives me this info for the bucket:
----
Replication count by site:
site 1: 1
site 2: 7
Search count by site:
site 1: 1
----
Is there a way to force replication onto a desired indexer on the other site? (Splunk lists me only one, and always the same, indexer on the other site.) Thanks!
Hi @ashidhingra, yes: after a stats command you have only the fields produced by the stats, so you should try something like this:

<your_search> earliest=-1mon latest=@mon
| bucket span=1s _time
| stats count count(eval(action=="success")) AS success count(eval(action=="failed")) AS failed BY _time
| stats max(count) AS Peak_TPS sum(success) AS success sum(failed) AS failed

You cannot use timechart here because timechart cannot carry the additional fields through the aggregation.
Ciao. Giuseppe
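The logic of the two-stage stats (count events per second, then take the maximum and the totals) can be sketched outside Splunk. Here is a minimal Python simulation over made-up (timestamp, action) events; the timestamps, actions, and counts are all hypothetical, and the real work is done by the stats commands inside Splunk:

```python
from collections import Counter

# Hypothetical events: (epoch_second, action) pairs, standing in for raw events.
events = [
    (100, "success"), (100, "success"), (100, "failed"),
    (101, "success"),
    (102, "failed"), (102, "success"), (102, "success"), (102, "success"),
]

# Stage 1: `bucket span=1s _time | stats count ... BY _time`
per_second = Counter(ts for ts, _ in events)           # total events per second
success = sum(1 for _, a in events if a == "success")  # count(eval(action=="success"))
failed = sum(1 for _, a in events if a == "failed")    # count(eval(action=="failed"))

# Stage 2: `stats max(count) AS Peak_TPS sum(success) ... sum(failed) ...`
peak_tps = max(per_second.values())

print(peak_tps, success, failed)  # prints: 4 6 2
```

The key point the sketch illustrates: the peak is a max over per-second counts, while success/failed are plain totals, which is why two stats passes are needed.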
Hi,

| rest /services/cluster/manager/buckets | where multisite_bucket=0 AND standalone=0

from the MC gives me the same error messages. Port 8089 has been tested between MC > 8089 > indexer(s) and is open. I think it is because the web service is off on my indexers... but no time to dig into this right now; the priority is to get SF / RF back to green. One thing I have noticed: when I try to resync one of these 3 pending jobs, the drop-down menu for choosing where to resync the bucket only offers indexers from the bucket's origin site, plus a single indexer from the other site. So I can't force replication to another indexer in the other site... Checking the pending task gives me this info for the bucket:
----
Replication count by site:
site 1: 1
site 2: 7
Search count by site:
site 1: 1
----
Is there a way to force replication onto a desired indexer on the other site? (Splunk lists me only one, and always the same, indexer on the other site.) Thanks!
Thanks for the reply @yuanliu. Agree to disagree. If you look at the very beginning of my post I asked: "I have a challenge finding and isolating the unique hosts out of two sources". I think this is clear; SysMon and DHCP were just examples, nothing concrete. I reiterated this point during the discussion. Apologies if I was misunderstood. Thanks all for your help.
I am getting the peak stats by bucket using this:

<your_search> | bucket span=1s _time | stats count by _time | timechart max(count) AS Peak_TPS span=1m

Somehow the two queries are not working together.
Hi, can anyone help me with how to trigger these GUI Custom Info events as email actions using the Predefined Variables concept? Due to the dynamic behavior of pod names, AppD by default gives only count-based alerts instead of the name of the pod that went down. Do we have any templates for this type of requirement? https://www.bing.com/ck/a?!&&p=0eb6569b2b7936e0JmltdHM9MTcwNTg4MTYwMCZpZ3VpZD0zY2VjNWZlOS1lNDUzLTZkNDctMDVjOC00YmU2ZTVjMzZjMmEmaW5zaWQ9NTI1OA&ptn=3&ver=2&hsh=3&fclid=3cec5fe9-e453-6d47-05c8-4be6e5c36c2a&psq=pod+down+alert+appdynamics&u=a1aHR0cHM6Ly9jb21tdW5pdHkuYXBwZHluYW1pY3MuY29tL3Q1L0luZnJhc3RydWN0dXJlLVNlcnZlci1OZXR3b3JrL0luZGl2aWR1YWwtUG9kLVJlc3RhcnQtYWxlcnRzL3RkLXAvNTExMTk&ntb=1
Hi @tv00638481, Since DDAS is archive storage, Splunk Cloud keeps only compressed raw data. Compression depends on the data content but is typically estimated at around 15% of the raw size. In your case, 60GB is normal for 400-500GB of ingestion. You can make your own calculation based on this ratio.
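To make that estimate concrete, here is a quick back-of-the-envelope calculation. Note the 15% figure is only an estimate; real compression varies with the data content:

```python
# Rough DDAS sizing: archived size is roughly 15% of raw ingested data (estimate only).
compression_ratio = 0.15

for ingested_gb in (400, 500):
    archived_gb = ingested_gb * compression_ratio
    print(f"{ingested_gb} GB ingested -> ~{archived_gb:.0f} GB archived")
# 400 GB -> ~60 GB and 500 GB -> ~75 GB, so ~60 GB archived for
# 400-500 GB of ingestion is consistent with the estimate above.
```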
Hi @mmcap, as I said, you can start from the wineventlog:security logs, which contain the information most useful for security. Then you could take processes (to identify whether there's some rogue process), open ports, and local admins. I usually enable all the logs, possibly disabling only the performance monitoring because it's very verbose and (for this reason) expensive in terms of license. Ciao. Giuseppe
Hi @ashidhingra, the search depends on the data you have. So, supposing that the field with the traffic to monitor is "bytes", the field with success and failure is "action", and you want this monitoring for each host, you could try something like this for a month:

<your_search> | stats max(bytes) AS peak count(eval(action=="success")) AS success count(eval(action=="failed")) AS failed BY host

Ciao. Giuseppe
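The per-host aggregation this search performs can be illustrated with a small Python sketch over made-up (host, bytes, action) tuples; the host names and values are hypothetical, and inside Splunk the stats command does this work for you:

```python
from collections import defaultdict

# Hypothetical events: (host, bytes, action), standing in for the real fields.
events = [
    ("web01", 500, "success"), ("web01", 1200, "failed"),
    ("web02", 300, "success"), ("web02", 800, "success"),
]

# Equivalent of: stats max(bytes) AS peak
#                      count(eval(action=="success")) AS success
#                      count(eval(action=="failed")) AS failed BY host
stats = defaultdict(lambda: {"peak": 0, "success": 0, "failed": 0})
for host, nbytes, action in events:
    s = stats[host]
    s["peak"] = max(s["peak"], nbytes)       # running max of bytes per host
    if action in ("success", "failed"):
        s[action] += 1                        # conditional counts per host

print(dict(stats))
```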
I have specified a specific index to send the logs to, but when I search on the search head, no logs are found. Do I have to specify anything in the inputs.conf file?
set diff does not give hosts that do not have SysMon, as the original question specifies. So, you want to know which hosts are unique to each search, and don't care whether they only come from dhcp_source_index? (That is why I was asking very specific clarification questions, and stated clear assumptions about what my search is intended to do.) Again, set is an expensive operation. You should be able to use stats to achieve it. The following is equivalent to set diff:

index=dhcp_source_index OR index=sysmon_index
| stats values(index) as index by host
| where mvcount(index) == 1

Maybe you have some requirements that you are not telling us?
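In set terms, this stats-based search computes a symmetric difference on host. A minimal Python sketch of the same logic, with hypothetical host lists, may make that clearer:

```python
# Hypothetical host lists for the two indexes.
dhcp_hosts = {"hostA", "hostB", "hostC"}
sysmon_hosts = {"hostB", "hostC", "hostD"}

# Equivalent of: stats values(index) as index by host | where mvcount(index) == 1
index_by_host = {}
for idx, hosts in (("dhcp_source_index", dhcp_hosts), ("sysmon_index", sysmon_hosts)):
    for h in hosts:
        index_by_host.setdefault(h, set()).add(idx)

# Keep only hosts that appear in exactly one index.
unique_hosts = {h for h, idxs in index_by_host.items() if len(idxs) == 1}
print(sorted(unique_hosts))  # prints: ['hostA', 'hostD']
```

The result matches Python's built-in symmetric difference (`dhcp_hosts ^ sysmon_hosts`), which is exactly what `set diff` would return, but the stats form runs as a single streaming aggregation in Splunk.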
How to get peakstats and a count of success and errors for a month in one table?
Hi All, I am new to the Splunk clustering environment and I have a few questions that came up in an interview. Can anyone please help me with these?
1. Can we delete an index folder? Will we have permission to delete the index folder Splunk\var\lib\splunk\TestDB?
2. Can we copy an index folder, paste it into some other index folder, and still be able to search the logs?
3. Where do we install the DB Connect app and other apps: on the search head cluster or the indexer cluster?
4. What is the process name when we extract logs via props.conf and transforms.conf?
5. How do we upgrade a Splunk cluster environment, in simple steps?
6. What does the search head captain's process do?
Thanks, Karthigeyan R
I agree that we should have an option to move data from SmartStore back to local storage.
@MikeR there are a few ways to achieve this, but the simplest is probably to use the Phantom app (for Phantom itself) with the 'add_artifact' action. This uses phantom.collect(), and if you set the container input to the id of the other container, it will update that container with the artifact info provided in the action and should return an id.
@Carloszavala121 you can use the Timer app to generate a container with a specific label to set off any associated, active automation. You can schedule the poll to create them as often as you need.
Thanks for your reply @yuanliu. Unfortunately, your search did not provide the results I wanted. After executing the separate searches and comparing the results manually, the outcome differs from your search's result. Please do try it out. After lots of trial and error I finally found the approach that does the trick: the 'set diff' command. I will provide my solution tomorrow for everyone to use. Regards, Dan
I have a Splunk search that returns the wrong results from a KV store if the secondUID field is set to itself before doing the lookup. This is distilled from the actual search simply to show the bug. Both secondUID and uID should be represented as strings. Does anybody know why | eval secondUID=secondUID causes the lookup command to return the wrong results? When it is commented out, the correct results are returned.

The results are consistently the same wrong results when they are wrong, and the errors are event-count dependent. For instance, if I raise the head command on line 4 from 4000 results to 10000 results, the wrong-result rate of the lookup goes from 4.3% to 11.83% for the events I am passing in for this example. If I pass in a different set of events, the results are still wrong, and consistently the same wrong results, but not necessarily the same percentage of wrong results compared to the other starting events.

If you either comment out that eval on line 8 or do | eval secondUID=tostring(secondUID), then the correct results are returned from the lookup command. If you replace tostring() with tonumber(), the number of wrong lookups goes up.

I don't think this is intended functionality, because | eval secondUID=secondUID should not change the results IMO, and the percentage of errors depends on how many events are passed through the search: more events = a higher percentage of errors. The string comparison functions in the where commands also show that nothing should be changing.
| inputlookup kvstore_560k_lines_long max=10000
| stats dc(uID) as uID by secondUID
| where uID=1
| head 4000 ```keep 4000 results with the 1=1 uID to secondUID relationship established```
| eval secondUIDArchive=secondUID ```save the initial value```
| where match(secondUIDArchive, secondUID) and like(secondUIDArchive, secondUID) ```initial value is unchanged```
| eval secondUID=secondUID ```this line causes the search to return different results compared to when commented out```
| where match(secondUIDArchive, secondUID) and like(secondUIDArchive, secondUID) ```string comparison methods show they are the same still```
| lookup kvstore_560k_lines_long secondUID output uID ```output the first UID again where there should be a 1=1 relationship```
| table uID secondUID secondUIDArchive
| stats count by uID ```the final output counts of uID vary based on whether the eval on line 8 is commented out```