All Posts


@bowesmana @yuanliu Thank you for your help. I understand now: if I want to use Student="Total", I just use the following:
| addcoltotals labelfield=Student label=Total
| eval Score = if(Student="Total", floor(Score) + 1, Score)
If I leave the label blank, I can use:
| eval Score = if(isnull(Student), floor(Score) + 1, Score)
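For anyone trying this later, a minimal self-contained version of the first variant, with makeresults standing in for the real data (the generated Student/Score values are just placeholders):
| makeresults count=3
| streamstats count as n
| eval Student="Student ".n, Score=n*10+0.4
| table Student Score
| addcoltotals labelfield=Student label=Total
| eval Score = if(Student="Total", floor(Score) + 1, Score)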
That is correct, but an indexer cluster would not solve that problem. I suggest having two HFs in an active/warm standby configuration.  There would need to be some means to copy state information from the active HF to the standby.  That would be a non-Splunk solution.
Hi Splunkers, I have a strange situation with some universal forwarders. On some Windows hosts, a colleague installed the UF using the graphical wizard. Those forwarders must be managed with a Deployment server. He did NOT use the "customize" options, so he did not set which logs must be sent to the HF (Application, Security and so on) or a destination HF/indexers. He only entered:
Admin username and password
Deployment server IP address and port
As written above, he did not enter an HF and/or indexers; the idea is that once the UF has spoken with the Deployment server, 2 apps containing inputs.conf and outputs.conf are downloaded and, after that, logs are sent. On the Deployment server (we checked), the apps that should be downloaded by the UF have been created and contain the above 2 files.
So why did I write "the apps that should be downloaded"? Because logs are not collected and sent to the HF, we did some troubleshooting and found that the apps have not been downloaded. I mean: on the host where the UF is installed, if we go to $SplunkUFHOME$\etc\apps, the 2 apps are not present. That means no custom inputs.conf and outputs.conf are present on the UF; only the defaults provided with the installation are there.
The first thing we thought: OK, we have network issues. But it seems not: from the host with the UF we can ping and telnet the deployment server on its port. At the same time, we can access the firewall that manages this traffic and we don't see, in the firewall logs, any evidence of blocked/truncated connections. The UF can reach the DS and vice versa without issues. We even tried to manually copy the app folders onto the UF (I know, a very bad thing, don't blame me please...), but the situation is always the same.
So, the question is: if no network issues are present, what can be the root cause of the apps not being downloaded?
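Next we plan to check whether the deployment client is actually phoning home. Roughly this (the install path and server address below are placeholders, not our real values, and log component names can vary slightly between versions):
rem On the Windows host running the UF
cd "C:\Program Files\SplunkUniversalForwarder\bin"
rem Show the effective deployment-client settings the UF is really using
splunk btool deploymentclient list --debug
rem A working configuration normally looks roughly like this
rem (etc\system\local\deploymentclient.conf or an app laid down at install time):
rem   [deployment-client]
rem   [target-broker:deploymentServer]
rem   targetUri = <deployment-server-ip>:8089
rem Look for phone-home activity and errors in splunkd.log
findstr /i "DeploymentClient PhonehomeThread DeployedApplication" "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log"
On the deployment server side, "splunk list deploy-clients" should show whether the UF has ever checked in.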
You should be able to utilize built-in Splunk JSON commands and/or JSON functions to build out valid STIX 2.1 objects from Splunk events. Here is some sample SPL to give you an example of how to build the JSON from individual fields in Splunk.
| makeresults
| fields - _time
``` gen properties data ```
| eval enum=split("attack-pattern|campaign", "|"), description="The type of this object, which MUST be the literal `attack-pattern`.", type="string"
| tojson str(enum) str(type) str(description) output_field=properties
| fields - enum, type, description
``` gen ID data ```
| eval title="id", pattern="^attack-pattern--"
| tojson str(title) str(pattern) output_field=id
| fields - title, pattern
``` gen Name data ```
| eval type="string", description="The name used to identify the Attack Pattern."
| tojson str(type) str(description) output_field=name
| fields - type, description
``` gen description data ```
| eval type="string", description="A description that provides more details and context about the Attack Pattern, potentially including its purpose and its key characteristics."
| tojson str(type) str(description) output_field=description
| fields - type, description
``` gen kill_chain_phases data ```
| eval "$ref"="../common/kill-chain-phase.json"
| tojson str($ref) output_field=items
| fields - "$ref"
| eval type="array", description="The list of kill chain phases for which this attack pattern is used.", minItems=1
| tojson str(type) str(description) json(items) num(minItems) output_field=kill_chain_phases
| fields - minItems, items, description, type
| tojson json(properties) json(type) json(id) json(name) json(description) json(kill_chain_phases) output_field=allOf
| fields - properties, type, id, name, description, kill_chain_phases
| eval ref="../common/core.json"
| tojson str(ref) output_field=allOf_2
| eval allOf=mvappend('allOf_2', 'allOf')
| fields - allOf_2
| eval required="name", type="object", description="Attack Patterns are a type of TTP that describe ways that adversaries attempt to compromise targets. ", title="attack-pattern", "$schema"="http://json-schema.org/draft-04/schema#"
| tojson str($schema) str(title) str(description) str(type) json(allOf) str(required) output_field=stix_2_payload
| fields + stix_2_payload
For this example I used a generative command to put together sample data first, but if you are building from a Splunk event then the fields should all be derived from _raw or already extracted. The example is more of a demonstration of how to build a valid STIX 2.1 JSON object using Splunk. Below is the JSON object built by the SPL above.
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "allOf": [
    {
      "ref": "../common/core.json"
    },
    {
      "id": {
        "pattern": "^attack-pattern--",
        "title": "id"
      },
      "kill_chain_phases": {
        "description": "The list of kill chain phases for which this attack pattern is used.",
        "items": {
          "$ref": "../common/kill-chain-phase.json"
        },
        "minItems": 1,
        "type": "array"
      },
      "name": {
        "description": "The name used to identify the Attack Pattern.",
        "type": "string"
      },
      "properties": {
        "description": "The type of this object, which MUST be the literal `attack-pattern`.",
        "enum": [
          "attack-pattern",
          "campaign"
        ],
        "type": "string"
      }
    }
  ],
  "description": "Attack Patterns are a type of TTP that describe ways that adversaries attempt to compromise targets. ",
  "required": "name",
  "title": "attack-pattern",
  "type": "object"
}
Hi richgallowy, thanks for your answer and clarifications. If I install the IA on one HF and that HF goes down, I'm no longer collecting SentinelOne logs. Am I right?
The only inputs that should be enabled on an indexer are those that query the local server; otherwise, data duplication may result. Per the SentinelOne installation instructions, the inputs app should be installed on a heavy forwarder. No matter what the indexer configuration is, the IA goes on an HF. Only the TA should be installed on the indexers. FTR, it is not necessary to use an indexer cluster to have high availability on ingest. HA on ingest is provided by having forwarders distribute data across more than one indexer. Indexer clusters protect the data by keeping multiple copies of it; the extra copies offer HA at search time.
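For context, a minimal outputs.conf sketch of that forwarder-side load balancing (the indexer host names and port are placeholders):
# outputs.conf on the forwarder (UF or HF)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx01.example.com:9997, idx02.example.com:9997
# switch targets periodically so data is spread across both indexers
autoLBFrequency = 30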
Hi all, I have set up an indexer cluster to achieve high availability at the ingestion phase. I'm aware of "Update peer configuration" and I have reviewed the instructions under the Details tab of the SentinelOne App. I cannot see an explicit mention of an indexer-cluster setup. What are the steps to set up the input configuration for an indexer cluster while avoiding data duplication? Thanks for your help
I have the same issue. Is there a way? I tried writing into the submitted token model inside the "on" callback but cannot "get" the token at a higher level. On the console I can see that it was indeed written into the token, though.
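For reference, the pattern I am experimenting with now, setting the token on both the default and submitted models inside the callback, looks roughly like this (the search id "mysearch" and the token name are placeholders):
require([
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function(mvc) {
    // both token models exist on a Simple XML dashboard
    var defaultTokens   = mvc.Components.get('default');
    var submittedTokens = mvc.Components.get('submitted');

    var mySearch = mvc.Components.get('mysearch');   // placeholder search id
    mySearch.on('search:done', function() {
        // write to the default model first, then push to submitted,
        // so $mytoken$ resolves everywhere in the dashboard
        defaultTokens.set('mytoken', 'some value');
        submittedTokens.set('mytoken', defaultTokens.get('mytoken'));
    });
});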
Hi there, Logs sent to SC4S include the date, time and host in the event; however, when they are sent to the indexer, the date, time and host are missing. How can I get them back so the logs look exactly the same? I would like the date, time and host included in the event. I appreciate any hints. Thanks and regards, pawelF
How can I convert a Splunk event to STIX 2.1 JSON? I think I need this for a connection to a SOC center. I currently use Splunk Enterprise. How can I do this? Is there any app that can convert it?
Hi @dtburrows3  Thanks a lot for your support, it is working as expected. 1000 Karma points to you.
Hello, I tested and my curl is now working, but I always get this error with the Mulesoft HEC
Hi. We are seeing weird behaviour on one of our universal forwarders. We have been sending logs from this forwarder for quite a while and it has been working properly the entire time. New logfiles are created every second hour and log lines are appended to the newest file. Last night the universal forwarder stopped working normally. When a new file was created, the forwarder sent the first line to Splunk, but lines appended later are not being forwarded. There are no errors logged in the splunkd.log file on the forwarder, nor any error messages on the receiving indexers. Every time a new file is generated, the forwarder sends the first line to Splunk, but the appended lines seem to be ignored. As far as I can see, there have not been any changes on the forwarder or on the Splunk servers that might cause this defect. Is there any way to debug the parsing of the logfile on the forwarder to identify the issue? Any other ideas what the issue could be here? Thanks.
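For reference, the kind of forwarder-side checks we are thinking of trying (a rough sketch, assuming a Linux-style install path; log channel names can differ between Splunk versions):
# Ask the UF what it thinks about each monitored file
# (look at how much of the file it believes it has read, and any "ignored" reason)
$SPLUNK_HOME/bin/splunk list inputstatus

# Temporarily raise verbosity for the file-tailing code, then watch splunkd.log
$SPLUNK_HOME/bin/splunk set log-level TailReader -level DEBUG
$SPLUNK_HOME/bin/splunk set log-level WatchedFile -level DEBUG
tail -f $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -i -e TailReader -e WatchedFile

# Remember to set the log levels back to INFO afterwards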
The host value in the file below gets changed automatically every now and then. Can you help me write a bash script that checks the host value every 5 minutes and, if the value differs from the actual hostname (as returned by "uname -n"), automatically corrects the host value, saves the file, and then restarts the splunk service?
cat /opt/splunk/etc/system/local/inputs.conf
[default]
host=iorper-spf52
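Something along these lines is what I have in mind, as a rough sketch only (paths are assumptions and it would need testing before real use):
#!/bin/bash
# Keep the [default] host value in inputs.conf in sync with `uname -n`.
# Intended to be run from cron, e.g.: */5 * * * * /usr/local/bin/fix_splunk_host.sh

INPUTS_CONF="/opt/splunk/etc/system/local/inputs.conf"
EXPECTED_HOST="$(uname -n)"
CURRENT_HOST="$(awk -F'=' '/^host[ ]*=/ {gsub(/ /,"",$2); print $2; exit}' "$INPUTS_CONF")"

if [ -n "$CURRENT_HOST" ] && [ "$CURRENT_HOST" != "$EXPECTED_HOST" ]; then
    # Rewrite the host line in place and restart Splunk so the change takes effect
    sed -i "s/^host[ ]*=.*/host=${EXPECTED_HOST}/" "$INPUTS_CONF"
    /opt/splunk/bin/splunk restart
fi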
I am looking for this information to check the history of modifications made to a lookup file. If anyone can help me with this, it would be much appreciated!
Hello @Sambaing, how do you index these logs? Maybe your "source" field is not correctly defined.
Hi! Thanks for taking the time; sadly this didn't work out for me. Ideally I would keep the same format as:
| timechart span=1s count AS TPS
| eventstats max(TPS) as peakTPS
| eval peakTime=if(peakTPS==TPS,_time,null())
| stats avg(TPS) as avgTPS first(peakTPS) as peakTPS first(peakTime) as peakTime
| fieldformat peakTime=strftime(peakTime,"%x %X")
With the addition of a couple of lines for the min TPS and when it took place, that would be ideal.
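Something like this is what I am aiming for, a sketch that keeps the same shape and just mirrors the peak logic for the minimum (untested):
| timechart span=1s count AS TPS
| eventstats max(TPS) as peakTPS min(TPS) as minTPS
| eval peakTime=if(peakTPS==TPS,_time,null()), minTime=if(minTPS==TPS,_time,null())
| stats avg(TPS) as avgTPS first(peakTPS) as peakTPS first(peakTime) as peakTime first(minTPS) as minTPS first(minTime) as minTime
| fieldformat peakTime=strftime(peakTime,"%x %X")
| fieldformat minTime=strftime(minTime,"%x %X")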
Hello @uagraw01, what is `indextime` for you, is it a macro? If so, try replacing it with the content of the macro directly.
Hello @WanLohnston you can try something like this:
| timechart span=1d count(myfield) as nb_myfield
| eventstats min(nb_myfield) as min_fields max(nb_myfield) as max_fields avg(nb_myfield) as moy_fields
Might be. I'm not very strong on Cloud.