All Posts

I am new to this field; is it possible to explain the solution step by step?
Hi @tuts, as I said, the Threat Security Domain is part of the Correlation Search's name. Clone your CS and change the Security Domain; you'll have a new CS with the correct name. Ciao. Giuseppe
Hello all, I've run into a problem with the backfill when creating (I also tried cloning) a KPI for Splunk license metrics, using the following search:

index=_internal source=*license_usage.log type="Usage"
| fields idx, b
| eval indexname = if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin span=5min _time
| stats sum(b) as b by indexname, _time
| eval GB=round(b/1024/1024/1024, 3)
| fields _time, indexname, GB

The use case: I want a KPI for license usage with the individual indexes as entities.

Configuration info: Since I want the license info on a per-index basis, I configured the KPI to be split into entities by the field "indexname". For frequency and calculation I selected: maximum of GB per entity as the entity value, sum of entity values as the aggregate, over the last 5 minute(s), every 5 minute(s). Gaps in the data are filled with null values and given an unknown threshold level. So far so good. I also configured a backfill for the last 30 days (taxing on the system, but it should manage).

The problem: After seeing the message that the backfill had completed, I checked the itsi_summary index and found the backfilled KPI data, but with regular gaps. More precisely, for each day the backfill contains data starting at the KPI's activation time (here 12:30) for about six hours (until roughly 18:25/18:30), and then there are no further values until around 12:30 the next day, even though there is license usage during the gap times and it is available in the license_usage.log used by the KPI search. The data since activation is continuous and has no gaps.

I tried cloning the KPI and recreating it with both an ad hoc and a base search, but every attempt showed the same curious result (just with a different starting point, since the activation time of the KPI differed). So now I am wondering whether there is some sort of limit on backfilling, or whether someone has an idea what caused this strange backfill behaviour. (There was also no error message in the _internal index as far as I could tell.)

Help and ideas would be appreciated. Thanks in advance.
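If anyone wants to map out exactly where the backfilled datapoints stop, a minimal diagnostic sketch against the summary index could look like the following (assuming the summary events carry an itsi_kpi_id field; the KPI ID is a placeholder and the 5-minute span simply matches the KPI schedule above):

index=itsi_summary itsi_kpi_id=<your_kpi_id> earliest=-30d
| timechart span=5min count AS datapoints
| where datapoints=0

Every row returned marks a 5-minute bucket with no backfilled value, which should make the daily roughly-6h-on / rest-of-day-off pattern easy to confirm.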
That's a slightly complicated setup. Unfortunately, UFs can send data "anywhere". You can try to fight it to some extent, but in general with s2s the metadata fields are set on the sending end and you don't have any network-level metadata. To some extent you could mitigate it by sending from the UF with s2s over HTTP (using httpout) and enabling index validation with the s2s_indexes_validation option for specific HEC tokens (this works only with sufficiently recent Splunk versions). As for the syslog data, I'd suggest doing the filtering and rerouting on the SC4S layer.
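For reference, a minimal sketch of what the UF-side outputs.conf could look like for the httpout route mentioned above (stanza and setting names as documented for recent Splunk versions; the token and URI are placeholders):

# outputs.conf on the UF (requires a sufficiently recent UF version)
[httpout]
httpEventCollectorToken = <your-hec-token>
uri = https://<your-hec-endpoint>:8088

The index validation itself would then be enabled per HEC token on the receiving end, as noted above.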
That's an interesting problem, but I think it's really a case of malformed data. If your field values contain commas, they should be enclosed in quotes. If your column order is constant, you can define a regex-based search-time extraction that accounts for commas inside field values.
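As an illustration only (the field and column names are made up), such an extraction for a three-column layout where the middle value may be quoted and contain commas could look like:

| rex field=_raw "^(?<first_col>[^,]+),(?<middle_col>\"[^\"]*\"|[^,]*),(?<last_col>.*)$"

Once the pattern is known to match the data, the same regex could go into an EXTRACT- stanza in props.conf so the fields are extracted automatically at search time.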
Logs are landing directly from UF to indexers
Are you sending directly from your UF to indexer(s)? Or do you have a HF somewhere in the middle?
"NT service\splunkforwarder" does not have native permission levels to read from all the windows log channels, especially the SECURITY log channel and SYSMON channels. The easiest option is to add "N... See more...
"NT service\splunkforwarder" does not have native permission levels to read from all the windows log channels, especially the SECURITY log channel and SYSMON channels. The easiest option is to add "NT Service\SplunkForwarder" object to the "Event Log Readers" group in the system.  Or create a domain user, restart all the instances of SplunkForwarder service with the newly created domain user and come up with a GPO to add the domain user to "Event Log Readers". 
This is the search, but whichever security domain you choose, it categorizes it as a threat.
Yes, we have TA_windows installed. I've checked this add-on's inputs.conf for a hostname/host setting, but that field does not exist there.
That is indeed strange. Do you have TA_windows installed on your receiving end?
@catherinelam "warm standby" is the architecture and primary/secondary is the server role. Only one is active at any one time.
OK, so it's not about "converting" so much as simply dividing by 100. When you divide by 100 you can use round() to trim the result to the number of decimal places you need. And if you want to display it as a string ending with a percent sign, use fieldformat so that the underlying value doesn't get rendered into a string (which would break sensible sorting). You might even leave the original data as is and only divide within fieldformat, like this: | fieldformat whatever=tostring(whatever/100)."%" One caveat in your case: you're using timechart to prepare your data, so your field name(s) will vary depending on the input data, and I'm not 100% sure that fieldformat will work with foreach.
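To make that concrete for the PLC data, here is a hedged sketch (the field names are invented; adjust the series names to whatever your timechart actually produces). eval inside foreach does support the <<FIELD>> token, so one safe pattern is to round inside foreach and apply fieldformat only to columns whose names you know:

| timechart span=5m avg(plc_value) AS plc_value
| foreach plc_* [ eval <<FIELD>> = round('<<FIELD>>' / 100, 2) ]
| fieldformat plc_value = tostring(plc_value)."%"

With the example value 2398 this renders as 23.98%, while the numeric 23.98 stays underneath for sorting and thresholds.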
In the newly ingested events, the old hostname is used in the host field, while the new hostname is shown in the ComputerName field.
Are you talking about the old events or the newly ingested ones?
Hi, I'm facing an issue with 5 hosts: we recently changed the hostname of these machines, but the change is not reflected in the host field; the old hostname is still shown there. Below is a sample log:

LogName=Security EventCode=4673 EventType=0 ComputerName=A0310PMTHYCJH15.tnjhs.com.pk
host = A0310PMNIAMT05    source = WinEventLog:Security    sourcetype = WinEventLog

We are receiving logs from these Windows hosts through a UF. I checked the apps deployed on these hosts and their inputs.conf; a hostname/host setting is not defined there. The new hostname is shown in the logs in the ComputerName field. Any suggestions for this problem would be appreciated.
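One place worth checking (a guess, but a common cause): on Windows UFs the host value is often pinned at install time in $SPLUNK_HOME\etc\system\local\inputs.conf rather than in the deployed apps, so it keeps the old machine name after a rename. The sketch below shows the kind of stanza to look for and correct (placeholder hostname; restart the UF after changing it):

# $SPLUNK_HOME\etc\system\local\inputs.conf on the renamed host
[default]
host = <new_hostname>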
Thank you very much for your reply. In fact, the returned result is indeed a percentage, and the returned data comes from Siemens PLC. '2398' is actually 23.98%, so I want to convert the result to 2 decimal places and add a percentage sign to the converted decimal.
I really don't know what to do; all I want is to use the security domains that I want.
Hello engineer, I did not understand where to go. Can you explain it to me in more detail? I am new to Splunk, and I have been looking for a solution to this problem for about two months.