All Posts

That depends on what you want to show for your risk score. Do you want to show the max? The average? Is there a group-by field you want to use?

Note that the excerpted examples from the documentation are very specific to the problems those examples are trying to illustrate. They are not a substitute for describing your desired output. If you don't tell people, volunteers have no way to read your mind.

In the simplest form, you can experiment with something like

| chart avg('event.Properties.riskScore') max('event.Properties.riskScore') min('event.Properties.riskScore') stdev('event.Properties.riskScore')

But you already did this. So, what is your desired output? Alternatively, what is the use case you are trying to apply? What is the business problem you are trying to solve/illustrate using this dashboard?
Hi, thanks for the response. Yes, I have gone through the aggregate functions. Could you please help with how to implement them in the code?

Exp 1:

chart eval(avg(size)/max(delay)) AS ratio BY host user

OR

timechart eval(round(avg(cpu_seconds),2)) BY processor
max(), avg() and stdev() are all aggregation functions which you can include on a stats command in your search https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Aggregatefunctions  
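For example, a minimal sketch reusing the riskScore field from the search earlier in this thread (the BY field is just an assumed example; substitute whatever group-by field your events actually carry, and the quoting simply follows the chart example above):

| stats avg('event.Properties.riskScore') AS avg_risk max('event.Properties.riskScore') AS max_risk stdev('event.Properties.riskScore') AS stdev_risk BY host

Each aggregation becomes its own column, which you can then point a dashboard panel at.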
Hi, I am not sure about this risk score value. How do I create a dashboard tile for these fields?
Seems like a glaring omission and no answer in 9 months.  
@marnall has better eyes than me and spotted the mix of italics and non-italics in the bracketed text. The final regex will likely be a combination of our suggestions.
I would highly recommend the website https://regex101.com/ as it allows you to see previews of your regex extractions as you write them. This regex might work:

on the remote host:\n\n(?<Vendor>[^\[\s]*)\s(?<Software>[^\[\s]*)\s*\[version\s(?<Version>[^\]]*)\]\s\[installed on (?<Date>[^\]]*)\]
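A minimal sketch of how that pattern could be applied in a search, assuming the field is called pluginText (as in the question) and that the \n\n line breaks survive into the extracted field:

| rex field=pluginText "on the remote host:\n\n(?<Vendor>[^\[\s]*)\s(?<Software>[^\[\s]*)\s*\[version\s(?<Version>[^\]]*)\]\s\[installed on (?<Date>[^\]]*)\]"
| table Vendor Software Version Date

This only captures the first software entry; test it against real events in regex101.com first.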
This regular expression works in regex101.com using the sample data.

| rex field=pluginText "host:\s+(?<vendorSoftware>.+?)\s+\[(?<version>[^\]]+)] \[(?<installedDate>[^\]]+)"

It looks for the "host" introductory text and skips the spaces which follow. The next set of text (terminated by whitespace before a left bracket) is the software name. The text in the two sets of brackets becomes the version and date, respectively.
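If the plugin output lists several software entries and you want all of them rather than only the first, rex's max_match option returns each capture group as a multivalue field. A sketch under that assumption (the per-line pattern is a guess at the anonymized sample and will likely need tuning):

| rex field=pluginText max_match=0 "(?<vendorSoftware>[^\[\n]+?)\s+\[version (?<version>[^\]]+)\] \[installed on (?<installedDate>[^\]]+)\]"

Each of vendorSoftware, version, and installedDate then holds one value per software line.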
Unfortunately you'd have to change the Python code to do this, at least for the timer app. If you open the timer_connector.py in $SOAR_HOME$/apps/timer_*/, it formats the event name with this function:

def _format_event_name(self):
    config = self.get_config()
    event_name = self._handle_py_ver_compat_for_input_str(config['event_name'])
    iso_now = datetime.datetime.now(pytz.utc).isoformat()
    label_name = config.get('ingest', {}).get('container_label', '')
    event_name = re.sub(
        r'(^|[^0-9a-zA-Z]+)(\$now)($|[^0-9a-zA-Z]+)',
        r'\g<1>{}\g<3>'.format(iso_now),
        event_name
    )

The time for the $now token is obtained from datetime.datetime.now(pytz.utc).isoformat(), explicitly setting the timezone to UTC. I have not looked for the code writing the activity log but would assume it is the same.
Looks like a networking issue. Your SOAR instance cannot find a network route to 192.168.208.144. Some things you could try are:

- Ping or interact with that IP address via any protocol, to see if it connects outside of SOAR.
- Ping or interact with another IP address via any protocol, to see if networking is working on the SOAR machine.
- Check that your SOAR instance and 192.168.208.144 are in the same subnet, or have switches that can carry traffic between them.
I also expected the LOG field to be extracted. Were the changes to props/transforms installed on the first full Splunk instance that sees the data? Was that instance restarted? Is the screenshot showing new data (since the restart)?
In SOAR, the webserver is nginx. It has a configuration file at ($SOARDIR$ = your SOAR/Phantom install directory, e.g. /opt/phantom or /data):

$SOARDIR$/usr/nginx/conf/phantom-nginx-server.conf

... which includes the config in conf.d:

$SOARDIR$/usr/nginx/conf/conf.d/phantom-nginx-server.conf

which sets the SSL options:

ssl_certificate /opt/phantom/etc/ssl/certs/httpd_cert.crt;
ssl_certificate_key /opt/phantom/etc/ssl/private/httpd_cert.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers
ssl_session_cache shared:TLS:2m;
ssl_dhparam /opt/phantom/etc/ssl/dhparams.pem;
Unfortunately, those searches are of different types (one starts with a streaming search command, the other with the report-generating tstats command), which means you can't combine them into one search and process combined results, or use multisearch to run both in parallel. You're limited to either using the join command as you attempted, or appending one result set to the other and then doing some summarizing stats. Having said that, I don't quite get how you imagine your desired output, since tstats will split results by three fields whereas your raw index search returns stats split only by host.
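A minimal sketch of the append-then-stats approach, reusing the two searches from the question (field names copied from there; untested, so treat it as a starting point):

| tstats count where index=* by host, index, sourcetype
| rename count as events_latest_hour
| append
    [ search index=_internal host=splunk_shc source=*license_usage.log* type=Usage
    | stats sum(b) as Usage by h
    | eval usage_lastest_hour=round(Usage/1024/1024/1024,2)
    | rename h as host
    | fields host usage_lastest_hour ]
| stats values(index) as index values(sourcetype) as sourcetype sum(events_latest_hour) as events_latest_hour sum(usage_lastest_hour) as usage_lastest_hour by host

Note this collapses everything to one row per host, so index and sourcetype become multivalue columns; that is exactly the output mismatch described above.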
Currently, I have a field called pluginText which is the following (italicized words are anonymized to what they represent):

<plugin_output>
The following software are installed on the remote host:

Vendor Software  [version versionnumber] [installed on date]
...
...
...
</plugin_output>

I wish to extract out Vendor, Software and versionnumber to separate fields and require a rex to do so. I am unfamiliar with using rex on this type of list, so I was hoping someone could point me in the right direction.
Not only is it possible, it's mandatory.  You don't have to worry about it, though, because Splunk manages it for you.
Not able to get that to work
Even using a field that has defined IP values doesn't work, and returns the following error:

"Streamed search execute failed because: Error in 'ipdetection' command: External search command exited unexpectedly with non-zero error code 1.."

This works, but you can't pass values to it within a query:

| ipqualityscore field="IP Address" value="8.8.8.8"
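One possible workaround sketch, assuming the ipqualityscore command really does only accept a literal value argument: the map command can substitute a field value from each result row into a templated search. The index name, rename, and maxsearches value below are all assumptions to adapt, and map runs one search per input row, so keep the result set small:

index=your_index
| dedup "IP Address"
| rename "IP Address" as ip
| map maxsearches=50 search="| ipqualityscore field=\"IP Address\" value=\"$ip$\""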
Are you able to use the join command based on host? <search 1> | join host [<search 2>]
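Filled in with the two searches from the question, a sketch might look like this (join has subsearch result limits, so verify nothing gets silently truncated):

| tstats count where index=* by host, index, sourcetype
| rename count as events_latest_hour
| join type=left host
    [ search index=_internal host=splunk_shc source=*license_usage.log* type=Usage
    | stats sum(b) as Usage by h
    | eval usage_lastest_hour=round(Usage/1024/1024/1024,2)
    | rename h as host
    | fields host usage_lastest_hour ]
| table host index sourcetype events_latest_hour usage_lastest_hour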
I am trying to join two searches together to table the combined results by host.

First search below is showing the number of events in the last hour by host, index, and sourcetype:

| tstats count where index=* by host, index, sourcetype
| addtotals
| sort -Total
| fields - Total
| rename count as events_latest_hour

Second search is showing the ingest per hour in GB by host:

(index=_internal host=splunk_shc source=*license_usage.log* type=Usage)
| stats sum(b) as Usage by h
| eval Usage=round(Usage/1024/1024/1024,2)
| rename h as host, Usage as usage_lastest_hour
| addtotals
| sort -Total
| fields - Total

Can you please help with how I would join these two searches together to display host, index, sourcetype, events_latest_hour, usage_lastest_hour? Basically I want to table the results of the first search and also include the "usage_lastest_hour" results from the second search in the table.
Hi, regex _raw is the wrong command here (regex - Splunk Documentation), but rex seems wrong too (rex - Splunk Documentation) because it can't do a key-value extraction in search. I found an odd way to handle this:

| spath
| rename _raw AS temp date AS _raw
| extract pairdelim="|" kvdelim="="
| rename _raw as date temp as _raw

reference: extract - Splunk Documentation

Is this what you are searching for?

Kind Regards