All Posts


Hi @_pravin, the only way is to have an unmasked IP address, on your internal network or from outside, that you can associate to a location using a lookup; otherwise there's no solution. Let me know if I can help you more, or please accept an answer for the other people of the Community. Ciao. Giuseppe P.S.: Karma Points are appreciated
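A minimal sketch of the lookup approach (assuming a hypothetical lookup named ip_locations with fields ip and location, and events carrying a src_ip field):

<your_search>
| lookup ip_locations ip AS src_ip OUTPUT location
| stats count BY user location

You would need to maintain the ip_locations lookup yourself, e.g. from your internal IP address plan.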
@richgalloway Hi, from the below-mentioned post, they fixed this by creating a local/props.conf: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-ISE-TA-fails-when-distributed-via-Cluster-Master/m-p/365170

props.conf

[cisco:ise]
DATETIME_CONFIG = /etc/slave-apps/Splunk_TA_cisco-ise/default/datetime_udp.xml

[cisco:ise:syslog]
DATETIME_CONFIG = /etc/slave-apps/Splunk_TA_cisco-ise/default/datetime_udp.xml

How can we create this in Splunk Cloud? Is it possible from All configurations? Thanks in advance.
Hi @gcusello, thanks for answering my question. We use a VPN within our organisation, so the original IP is masked, and we have also implemented SAML authentication recently, so it's tough to get the exact IP of the user. I have asked internally if there is some way of logging the user location. Thanks, Pravin
| untable _time name value
| where value != 0
| xyseries _time name value
| fillnull value=0
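In this approach, untable flattens the timechart into one row per (_time, series) pair, where drops the zero cells, xyseries pivots back to columns (so series that were zero everywhere disappear entirely), and fillnull restores zeros in the surviving columns.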
I have a timechart query which is giving me the below result. I want to exclude the columns with all zeros, like 02gdysjska2 and 2shbhsiskdf9. Note: these names can change and are not fixed.

_time                          003hfhdfs89huk  02gdysjska2  13hdgsgtsjwk  21dhsysbaisps  2shbhsiskdf9  5hsusbsosv
2024-01-23T09:45:00.000+0000   0               0            0             0              0             0
2024-01-23T09:50:00.000+0000   0               0            0             0              0             0
2024-01-23T09:55:00.000+0000   0               0            0             17961          0             0
2024-01-23T10:00:00.000+0000   0               0            1183          0              0             0
2024-01-23T10:05:00.000+0000   0               0            0             0              0             55
2024-01-23T10:10:00.000+0000   0               0            0             0              0             0
2024-01-23T10:15:00.000+0000   0               0            0             0              0             0
2024-01-23T10:20:00.000+0000   0               0            0             0              0             0
2024-01-23T10:25:00.000+0000   4280            0            0             0              0             0
2024-01-23T10:30:00.000+0000   0               0            0             0              0             0
2024-01-23T10:35:00.000+0000   0               0            0             0              0             0
Thanks for the fast response. As we are talking about millions of customers, that would not scale. I'll go for a Splunk API based solution then.
Hi - I get the same problem running splencore.sh, after exporting the path and setting permissions on the cert. The server is CentOS 8 Stream. Can this be related to an error in the cert, or a missing firewall opening from my Splunk HF?

[root@hostname bin]# ./splencore.sh test
Traceback (most recent call last):
  File "./estreamer/preflight.py", line 33, in <module>
    import estreamer.crossprocesslogging
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/__init__.py", line 31, in <module>
    from estreamer.diagnostics import Diagnostics
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/diagnostics.py", line 43, in <module>
    import estreamer.pipeline
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 29, in <module>
    from estreamer.metadata import View
ModuleNotFoundError: No module named 'estreamer.metadata'
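One quick check worth trying (a diagnostic sketch, assuming the default TA-eStreamer install path shown in the traceback): verify that the metadata package actually exists and is readable by the user running the script.

# the package directory should exist and contain an __init__.py;
# if it is missing or unreadable, the import fails with ModuleNotFoundError
ls -l /opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/metadata/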
Hi, everyone, I have an old dashboard that I want to convert to the Dashboard Studio format. However, it seems that Dashboard Studio does not support the use of prefix, suffix, and delimiter in the same way. Is there any way to achieve the same effect using a search query?
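One possible sketch (not an official Dashboard Studio feature, just an assumption about what prefix, suffix, and delimiter were doing in the old input): rebuild the wrapped string in SPL with eval, mvmap, and mvjoin, here for a hypothetical multivalue field selected_hosts and the classic host="..." OR ... pattern:

<your_search>
| eval filter = "(" . mvjoin(mvmap(selected_hosts, "host=\"" . selected_hosts . "\""), " OR ") . ")"

Here "(" is the prefix, ")" the suffix, and " OR " the delimiter; the resulting filter string can then be fed into a token or subsearch.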
One thing I have noticed is that when I try to resync one of these 3 jobs in pending, the drop-down menu to choose where to resync the bucket only offers indexers from the bucket's origin site, and only one indexer from the other site. So I can't force replication to another indexer in the other site... The "View Bucket Details" view in the details of the pending task gives me this info for the bucket:
----
Replication count by site: site1: 1, site2: 7
Search count by site: site1: 1
----
Is there a way to force replication onto a desired indexer on the other site? (Because Splunk lists only one, and always the same, indexer on the other site.) Thanks!
Hi @ashidhingra, yes, because after a stats command you have only the fields in the stats. You should try something like this:

<your_search> earliest=-1mon latest=@mon
| bucket span=1s _time
| stats count count(eval(action="success")) AS success count(eval(action="failed")) AS failed BY _time
| stats max(count) AS Peak_TPS sum(success) AS success sum(failed) AS failed

You cannot use timechart here because with timechart you cannot carry the additional fields. Ciao. Giuseppe
Hi,

| rest /services/cluster/manager/buckets
| where multisite_bucket=0 AND standalone=0

From the MC this gives me the same error messages. Port 8089 has been tested between MC > 8089 > indexer(s) and is open. I think it is because the web service is off on my indexers... but no time to dig into this right now; the priority is to get SF/RF back to normal green. One thing I have noticed is that when I try to resync one of these 3 jobs in pending, the drop-down menu to choose where to resync the bucket only offers indexers from the bucket's origin site, and only one indexer from the other site. So I can't force replication to another indexer in the other site... The check of the pending task gives me this info for the bucket:
----
Replication count by site: site1: 1, site2: 7
Search count by site: site1: 1
----
Is there a way to force replication onto a desired indexer on the other site? (Because Splunk lists only one, and always the same, indexer on the other site.) Thanks!
Thanks for the reply @yuanliu. Agree to disagree. If you look at the very beginning of my post, I asked: "I have a challenge finding and isolating the unique hosts out of two sources." I think this is clear, and Sysmon and DHCP were just examples, nothing concrete. During the discussion I reiterated this statement. Apologies if I was misunderstood. Thanks all for your help.
I am getting the peak stats by bucket using this:

<your_search>
| bucket span=1s _time
| stats count by _time
| timechart max(count) AS Peak_TPS span=1m

Somehow the two queries are not working together.
Hi, can anyone help me out on how to trigger these GUI Custom Info events into email actions using the Predefined Variables concept? Due to the dynamic behavior of pod names, AppD by default gives only count-wise alerts instead of the name of the pod that went down. Do we have any templates for this type of requirement? https://community.appdynamics.com/t5/Infrastructure-Server-Network/Individual-Pod-Restart-alerts/td-p/51119
Hi @tv00638481, since DDAS is archive storage, Splunk Cloud keeps only compressed raw data. Compression depends on the data content but is estimated at around 15% of the raw data size. In your case, 60GB is normal for 400-500GB of ingestion (0.15 × 400GB = 60GB). You can make your own calculation based on the above information.
Hi @mmcap, as I said, you can start with the WinEventLog:Security logs, which contain the most information useful for security; then you could take processes (to identify whether there's some rogue process), open ports, and local admins. I usually enable all the logs, eventually disabling only performance monitoring because it's very verbose and (for this reason) expensive in terms of license. Ciao. Giuseppe
Hi @ashidhingra, the search depends on the data you have. So, supposing that the field with the traffic to monitor is "bytes", the field with success and failed is "action", and you want this monitoring for each host, you could try something like this, for a month:

<your_search>
| stats max(bytes) AS peak count(eval(action="success")) AS success count(eval(action="failed")) AS failed BY host

Ciao. Giuseppe
I have specified a specific index so that we can send the logs to it, but when I search on the search head, no logs are found. Do I have to specify anything in the inputs.conf file?
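A minimal inputs.conf sketch (hypothetical path and index names; assuming a file monitor input on a forwarder). Note that the index attribute must name an index that actually exists on the indexers, or the events will not be searchable:

[monitor:///var/log/myapp/app.log]
index = my_custom_index
sourcetype = myapp:log
disabled = 0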
set diff does not give hosts that do not have Sysmon, as the original question specifies. So, you want to know which sets of hosts are unique to each search, and do not care whether they come only from dhcp_source_index? (That is why I was asking very specific clarification questions, and stated clear assumptions of what my search is intended to do.) Again, set is an expensive operation. You should be able to use stats to achieve it. The following is equivalent to set diff:

index=dhcp_source_index OR index=sysmon_index
| stats values(index) as index by host
| where mvcount(index) == 1

Maybe you have some requirements that you are not telling us?
How to get peak stats and a count of successes and errors for a month in one table?