All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have multiple jobs that run throughout the day and complete at different times with a statusText of FAILURE, SUCCESS, or TERMINATED. I need to create an alert that would send an email for all jobNames that failed in the last 12 hours, with the timestamp. However, the alert should only be triggered once, upon the completion of a specific jobName (i.e. once jobName test_job100 has a status of FAILURE or SUCCESS). All jobNames start with "test_"; the specific jobName which should trigger the alert also starts with test_: test_job100. Below is an example of the log:

2020-05-01 12:11:01.194, timestamp="2020-05-01 12:09:57.0", jobId="280568", jobName="test_job6", boxJobName=" ", eventCode="101", eventText="CHANGE_STATUS", statusCode="5", statusText="FAILURE", alarmCode="0", exitCode="1"

Can someone advise how I can achieve this?
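One possible approach (a sketch only — the index name is an assumption; field names are taken from the sample event above). Search all job completions for the last 12 hours, use eventstats to flag whether the trigger job has completed, and keep only failures when it has:

```
index=your_index jobName="test_*" (statusText="FAILURE" OR statusText="SUCCESS" OR statusText="TERMINATED") earliest=-12h
| eventstats count(eval(jobName="test_job100" AND (statusText="FAILURE" OR statusText="SUCCESS"))) AS trigger_done
| where trigger_done > 0 AND statusText="FAILURE"
| table timestamp jobName statusText
```

If you schedule this frequently (e.g. every 5 minutes) it returns no rows until test_job100 completes; to make the alert fire only once, throttle/suppress it for 12 hours in the alert's trigger settings.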
Hi, I have a scenario where query 1 triggers the condition for my alert to fire; if it fires, I want to send the output of a 2nd query in the email as tabular data. In the email subject I want to include the time duration of my 2nd query:

index=dte_fios sourcetype=dte2_Fios FT=*FT Error_Code!=0000 earliest=04/20/2020:11:00:00 latest=04/20/2020:13:00:00
| bin _time span=15m
| stats count as Total, count(eval(Error_Code!="0000")) AS Failure by FT,_time
| eval Failurepercent=round(Failure/Total*100)
| table _time,FT,Total,Failure,Failurepercent
| lookup ftthresholdlkp FT
| eval alert=case(some condition)
| where alert=1
| map search="search index=dte_fios sourcetype=dte2_Fios FT=$FT$ earliest=04/20/2020:12:45:00 latest=04/20/2020:13:00:00 | eval STime=strftime(earliest,"%m/%d %H:%M") , ETime=strftime(latest,"%m/%d %H:%M") | eval AlertType=if($Failurepercent$>50,"RED","AMBER") | table _time,WPID,MGRID,Host,System,DIP_Command,CID,DTE_Command,FT,OSS,Error_Code,Error_Msg"

I am trying to send the mail subject as "AMBER ALERT: Below are the failures from 04/20 12:45 TO 04/20 13:00 GMT". To get that, I used $result.earliest$ $result.latest$, but they come through blank in my subject. I then used eval to create two fields, STime and ETime, but if I add that in the map search it does not return any rows at all, and my AlertType is also not working. Can someone help me out here with how I can achieve the above subject using my query?
I ran the latest Splunk AppInspect API 2.1.0 using Postman for the Splunk app we are developing. We have a setup.xml file located inside the default folder of the app project. After running AppInspect recently, I got the issue "Do not use default/setup.xml in the Cloud environment. Please consider use Authorization Code Flow for server-side web applications that can securely store secrets." This error was not shown when I ran the app a week before, so I guess this rule was recently introduced in Splunk AppInspect. Can someone please provide some info on where to place the setup.xml file for this error to disappear, in a way that also works for Splunk Cloud?
I am creating a model for the prediction of license usage in our environment. I tried many combinations (around 25) of p, d, q for ARIMA and got a few results which are best among the rest. I want to know: is it possible to use auto-ARIMA in Splunk? If yes, please share the SPL. Thanks in advance.
Hi Experts, I have an inputlookup file which consists of two fields, i.e. _time and names, as shown below:

_time         names
02/02/2020    user1 user2
08/02/2020    user1 user2 user3
10/02/2020    user2

I want to expand the multivalue field, i.e. the names field, and show the unique users list based on time. I tried

|inputlookup filename.csv |stats values(names) as name |mvexpand name |dedup name |table name | sort - name

but it did not work for me. Please help on this. Thanks in advance.
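A sketch that may work here (assuming names is a true multivalue field in the lookup; if it is stored as a space-delimited string, makemv splits it first — drop that line otherwise):

```
| inputlookup filename.csv
| makemv delim=" " names
| mvexpand names
| stats values(names) AS name by _time
```

stats values() de-duplicates per _time, so each row ends up with the unique user list for that date; the stats values/mvexpand order in the original query collapsed all dates into one row before expanding, which is likely why it did not work.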
Hi, I created a report and shared it with users (joined to an LDAP server). They can see the report, but when they click on the numbers shown in the table, there is no result, while an admin user doing the same thing gets results loaded successfully. It seems the users need access to "RootObject". FYI:

1. Gave read permission to all users.
2. The report runs as the report owner.
3. Gave permission to users to access the index.
4. Gave permission to users to access the data model.
5. Inspecting the job shows: INFO UserManager - Unwound user context: myuser -> NULL

Any recommendation? Thanks,
Hello, I have an ALERT field which contains different ALERT values, and I want to filter one of them when its count is greater than 100:

ALERT="LINK-3-UPDOWN" count=500
ALARM="IFNET/1/CRCERRORRISING" count=20

So I tried the following, but it only shows ALERT="LINK-3-UPDOWN". I want to see all values, with only "LINK-3-UPDOWN" filtered by count:

|stats count by DATE,Region,managed_object,ALERT |where count>100 AND ALARM="LINK-3-UPDOWN" |sort -count -ALARM

Regards,
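If the goal is to keep every ALERT value but drop LINK-3-UPDOWN rows whose count is 100 or less, one sketch (note the grouping field is ALERT, so the where clause should test ALERT, not ALARM):

```
| stats count by DATE, Region, managed_object, ALERT
| where ALERT!="LINK-3-UPDOWN" OR count>100
| sort -count
```

The OR keeps all other ALERT values unconditionally and applies the count>100 threshold only to LINK-3-UPDOWN.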
Hi, I'm new to Splunk and need help writing a query to get records and create a chart based on that. I am trying to combine 4 searches into one; all searches are from the same index and same source:

1. index=eventviewer sourcetype=applicationlog "#firsttry success"
2. index=eventviewer sourcetype=applicationlog "#firsttry failed"
3. index=eventviewer sourcetype=applicationlog "#secondtry success"
4. index=eventviewer sourcetype=applicationlog "#secondtry failed"

The logic in the log is that I am trying to upload files into a DB with 2 tries. Records that failed in #firsttry are pushed again with #secondtry, so the #firsttry failed count = #secondtry success count + #secondtry failed count. I need to display a timechart with the date on the x-axis and all the search counts on the y-axis. The table should look like this:

_time      | TOTALCOUNT | SUCCESS#1 | FAILED#1 | SUCCESS#2 | FAILED#2
2018-03-29 | 100        | 80        | 20       | 15        | 5
2018-03-30 | 60         | 50        | 10       | 7         | 3

I want to create a chart showing all 5 counts next to one another, and when I click any one of the columns in the chart it should display the correct events filtered by date. Please help on this. Thanks in advance.
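A sketch combining all four searches into one timechart (assuming, from the example table, that TOTALCOUNT is the count of first-try events):

```
index=eventviewer sourcetype=applicationlog ("#firsttry success" OR "#firsttry failed" OR "#secondtry success" OR "#secondtry failed")
| timechart span=1d
    count(eval(searchmatch("#firsttry")))          AS TOTALCOUNT
    count(eval(searchmatch("#firsttry success")))  AS "SUCCESS#1"
    count(eval(searchmatch("#firsttry failed")))   AS "FAILED#1"
    count(eval(searchmatch("#secondtry success"))) AS "SUCCESS#2"
    count(eval(searchmatch("#secondtry failed")))  AS "FAILED#2"
```

Rendered as a column chart, the five series appear side by side per day; a drilldown on a column can then pass the clicked _time and series name into a panel that re-runs the raw event search filtered to that date.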
Hello, everybody! Can anybody help me understand why the following subsearch does not limit the results of the outer search to events from only one host?

index=_internal [search index=_internal | top limit=1 host | table host]

This is a test query; of course I have more meaningful subsearches and outer searches. My goal is to get a set of hosts from the subsearch and then query events from only these hosts in the outer search. Obviously the subsearch ends up with ( ( host="TheHostName" ) ), but I still get events from 100+ hosts from the outer search. It looks very strange to me.
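Two debugging sketches for this kind of subsearch problem. First, append | format to the subsearch pipeline in a standalone search to see exactly what string it expands to; second, try return, which explicitly emits host="value" pairs rather than relying on the default field-to-search conversion:

```
index=_internal [search index=_internal | top limit=1 host | return host]
```

Using fields host (instead of table host) before the implicit format is also worth trying, since top adds count and percent columns that must not leak into the expansion.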
Dear all, is there any way to deploy a Splunk Deployment Server instance with an HA concept? I mean, if DS1 goes down, DS2 should take over without any interruption. Thanks,
I'm very interested in using the dashboard variable feature. However, the documentation on how to use it is sparse (ref: https://docs.appdynamics.com/display/PRO45/Dashboard+Variables). I've tried to use it in both a metric and an ADQL query. In the metric, it lets me choose the variable in the application dropdown (I have it as an application), but the 3rd linked dropdown does not activate. Note that I do have a default set. In the ADQL query, I can't find a way to introduce the variable. My variable name is ChosenApp, which shows at the top as $ChosenApp. I've taken a valid query that produces data:

SELECT series(eventTimestamp, "30m"), count(*) FROM transactions where application = "RealAppName"

and tried to get it to take the variable, to no avail. Items I've tried replacing "RealAppName" with include: $ChosenApp, "$ChosenApp", ChosenApp, "ChosenApp", "{ChosenApp}", "{$ChosenApp}", {ChosenApp}, {$ChosenApp}, "@ChosenApp", @ChosenApp. I'm out of ideas on both, and Google was no additional help. Is anyone able to provide further instruction on what to try?
We have cases where the indexing delays are up to 15 minutes; it's rare, but it happens. In these cases, we see that the indexing queues are at 80–100 percent capacity on three of the eight indexers. We see moderate bursts of data in these situations, but not major bursts. These eight indexers use Hitachi G1500 arrays with FMD (flash memory drives). How can we better understand these situations and hopefully minimize the delays?
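A sketch for profiling which pipeline queue saturates first, per indexer, from Splunk's own metrics.log (standard _internal data; adjust the span and percentile to taste):

```
index=_internal source=*metrics.log group=queue
    (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue)
| eval fill_pct=round(current_size_kb/max_size_kb*100, 1)
| timechart span=5m perc95(fill_pct) AS queue_fill_pct by host
```

If indexqueue backs up first it usually points at disk write contention; if the earlier queues back up while indexqueue stays low, it points at parsing/typing load (e.g. heavy regex or line-breaking) rather than storage.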
I have a table that has 2 columns with transaction IDs, produced by stats values() as below:

| stats values(E-TransactionID) as E-TransactionID values(R-TransactionID) as R-TransactionID

I'd like to compare the values of both columns and only show the transaction IDs from R-TransactionID that do NOT appear in the E-TransactionID column. I've made the following attempts after the stats values(), with no luck. Any help is GREATLY appreciated.

Attempt 1 (had to try this anyway):
| table R-TransactionID E-TransactionID | where R-TransactionID != E-TransactionID

Attempt 2:
| eval match=if(R-TransactionID=E-TransactionID, "EQUAL", R-TransactionID) | stats values(match) as TransactionID

Attempt 3:
| foreach R-TransactionID [eval match=if(R-TransactionID!=E-TransactionID, R-TransactionID, "MATCH")] | stats values(R-TransactionID) as R-TransactionID values(E-TransactionID) as E-TransactionID values(match) as TransactionID

Attempt 4 (similar to the previous, but with table instead):
| foreach R-TransactionID [eval match=if(R-TransactionID!=E-TransactionID, R-TransactionID, "MATCH")] | stats values(R-TransactionID) as R-TransactionID values(E-TransactionID) as E-TransactionID values(match) as TransactionID
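One sketch that compares the two multivalue columns element by element (note that hyphenated field names must be single-quoted in eval — unquoted, R-TransactionID parses as R minus TransactionID, which may be part of the problem; renaming to underscores sidesteps that):

```
| stats values(E-TransactionID) AS E_TID values(R-TransactionID) AS R_TID
| eval only_in_R=mvmap(R_TID, if(isnull(mvfind(E_TID, "^".R_TID."$")), R_TID, null()))
| table only_in_R
```

mvmap (Splunk 8.0+) walks each R_TID value; mvfind returns null when the value is absent from E_TID, so only_in_R keeps exactly the IDs missing from the E column. mvfind takes a regex, so IDs containing regex metacharacters would need escaping.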
I followed the Palo Alto Add-on instructions and installed the TA on the receiving HF and my distributed, non-clustered indexers. I am noticing hundreds of parsed and extracted fields that are meaningless, i.e. fields created based on the parsing of a URI. It seems something is telling Splunk to parse the logs on commas and equal signs, for example. Has anyone else experienced this? Does anyone know how to fix this issue? I am using the 6.2.0 version of the TA. I also put the app and TA on the SH, but that causes a useful field, "src", to disappear; after disabling it, the src field returned. It seems like the add-on is misconfigured, but I need some advice on how to troubleshoot this. Thank you.
Is there a reference app built using the Splunk Cloud Services SDK, similar to the Splunk Reference App - PAS, except that it is not an app running on top of Splunk Enterprise, but an application in Java/Python/Go showcasing the features of the Splunk Cloud Services SDK? Thanks,
In a custom code block, given the following pseudo code:

def promptIpToBlock(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None):
    phantom.debug('promptIpToBlock() called')

    # set user and message variables for phantom.prompt call
    user = phantom.get_run_data("logged_in_user")
    message = """Enter IP/CIDR addresses to be blocked"""

    # responses:
    response_types = [
        {
            "prompt": "",
            "options": {
                "type": "message",
            },
        },
    ]

    phantom.prompt2(container=container, user=user, message=message, respond_in_mins=5, name="prompt_ip_to_block", response_types=response_types, callback=checkIpAgainstWhitelist)
    return

def checkIpAgainstWhitelist(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None):
    myVar = phantom.get_run_data(key='prompt_ip_to_block')
    phantom.debug("myVar: {}".format(myVar))

    # check for 'if' condition 1
    matched_artifacts_1, matched_results_1 = phantom.condition(
        container=container,
        action_results=results,
        conditions=[
            ["(phantom.valid_ip(promptIpToBlock:action_result.summary.responses.0) or phantom.valid_net(promptIpToBlock:action_result.summary.responses.0))", "==", "true"],
        ])

    # call connected blocks for 'else' condition 4
    join_formatBlockParamteres(action=action, success=success, container=container, results=results, handle=handle)
    return

'myVar' doesn't show up in checkIpAgainstWhitelist(). Am I using the correct API call to get the data from promptIpToBlock()?
Hi, we plan to deploy Splunk with indexer clustering (with 3 indexers) in our company. We know the hardware requirements for indexers and search heads, but it's not clear to us what resources (CPU cores, RAM, storage) are needed for the Cluster Master, License Master, Deployment Server, Heavy Forwarders, and DMC. Is there any recommendation for these components? Thanks!
I have 3 rows like below. I need to filter the rows that equal the current date (the current date being May 1).

Plan Start Time
May 01, 08:00 PM
May 03 10:00 PM
Apr 30 07:00 AM
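A sketch, assuming the field is literally named "Plan Start Time" and the values follow the month-day-time pattern shown (the commas are inconsistent across rows, so they are stripped first; since no year is present, only month and day are compared against today):

```
| eval t=replace('Plan Start Time', ",", "")
| eval plan_epoch=strptime(t, "%b %d %I:%M %p")
| where strftime(plan_epoch, "%m-%d") = strftime(now(), "%m-%d")
```

On May 1 this keeps only the "May 01, 08:00 PM" row.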
Our URLs are not being extracted from our firepower logs. The url field always shows "unknown" even when there is a URL in the logs. Does anyone else have this issue? When I try to manually extract the URL using the field extractor it never seems to work, since the URL is sometimes in different locations in the logs, and I am not very good at regex so I can't seem to get it to work myself. If the URL extraction is working for you, can you please share what you have configured for that? Thank you! 4 sample events below--- rec_type=71 dns_resp_id=0 ips_count=0 ssl_cipher_suite=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 ssl_version=TLSv1.2 ssl_rule_id=45 app_proto=HTTPS src_mask=0 ssl_server_cert_status="Invalid Issuer" sec_intel_event=No ssl_cert_fingerprint=8e9892de2bacb2060b1eb6d2ae732e489f86138a src_pkts=6 rec_type_simple=RNA ssl_expected_action="Do Not Decrypt" has_ipv6=1 num_ioc=0 sec_zone_ingress=inside ssl_flow_flags=75555521 referenced_host="" dest_autonomous_system=0 src_ip_country=unknown event_desc="Flow Statistics" mac_address=00:00:00:00:00:00 ssl_server_name="" ssl_flow_error=0 ssl_flow_messages=16408 fw_rule_reason=N/A iface_ingress=inside connection_id=64687 last_pkt_sec=1588603267 fw_policy=Default-Policy client_version="" url_reputation=Trusted dest_port=443 url_category="Business and Industry" event_type=1003 dns_ttl=0 sec_zone_egress=centurylink-outside ssl_policy_id=3f8afb02550111ecd74b9e4f4488bf9f dest_mask=0 ssl_actual_action="Do Not Decrypt" sensor=fwpr432432 http_response=0 first_pkt_sec=1588603241 dns_rec_id=0 dest_ip=170.146.102.193 ssl_session_id=0493d63c78766ac12c9b68f720ff528817429df265ae12419699be4b700f2219 dest_ip_country="united states" event_usec=0 dest_pkts=8 ssl_flow_status=Success file_count=0 legacy_ip_address=0.0.0.0 ip_proto=TCP user=bsmith ip_layer=0 monitor_rule_3=N/A src_autonomous_system=0 monitor_rule_1=N/A monitor_rule_7=N/A monitor_rule_6=N/A monitor_rule_5=N/A monitor_rule_4=N/A monitor_rule_2=N/A tcp_flags=0 
http_referrer="" vlan_id=0 sec_intel_ip=N/A ssl_url_category=0 fw_rule_action=Allow url=https://workforcenow.adp.com netbios_domain="" src_ip=172.1.5.6 netflow_src=00000000-0000-0000-0000-000000000000 instance_id=2 fw_rule=allowed_traffic user_agent="" monitor_rule_8=0 snmp_in=0 dns_query="" iface_egress=outside event_subtype=1 event_sec=1588603268 dest_tos=0 security_context=00000000000000000000000000000000 src_port=57759 src_bytes=668 web_app="ADP Workforce Now" client_app="SSL client" src_tos=0 snmp_out=0 rec_type_desc="Connection Statistics" dest_bytes=627 sinkhole_uuid=00000000-0000-0000-0000-000000000000 ssl_ticket_id=0000000000000000000000000000000000000000 rec_type=71 iface_ingress=INTERNAL event_desc="Flow Statistics" sinkhole_uuid=00000000-0000-0000-0000-000000000000 url_reputation=Unknown src_pkts=5405 sensor=fp003 ssl_ticket_id=0000000000000000000000000000000000000000 event_subtype=1 dest_ip=23.204.249.25 dest_ip_country="united states" dns_resp_id=0 user="No Authentication Required" ssl_flow_status=Unknown fw_policy=00000000-0000-0000-0000-00005eabbd8c fw_rule_action=Allow instance_id=5 user_agent="" client_version="" snmp_out=0 sec_intel_ip=N/A http_referrer="" fw_rule_reason=N/A netbios_domain="" iface_egress=OUTSIDE src_ip_country=unknown url_category=Unknown sec_zone_ingress=INSIDE num_ioc=0 ssl_server_cert_status="Not Checked" ssl_flow_messages=0 ssl_flow_flags=0 last_pkt_sec=1588533371 rec_type_simple=RNA event_type=1003 ssl_rule_id=0 src_bytes=577 dest_bytes=14301799 dest_mask=0 legacy_ip_address=0.0.0.0 referenced_host="" event_usec=0 first_pkt_sec=1588533357 ssl_cert_fingerprint=0000000000000000000000000000000000000000 ssl_policy_id=00000000000000000000000000000000 ssl_session_id=0000000000000000000000000000000000000000000000000000000000000000 ips_count=0 snmp_in=0 dns_rec_id=0 ssl_url_category=0 file_count=0 app_proto=HTTPS web_app=Microsoft url=https://definitionupdates.microsoft.com sec_intel_event=No event_sec=1588533357 dest_pkts=10416 
rec_type_desc="Connection Statistics" ssl_version=Unknown ssl_actual_action=Unknown src_autonomous_system=0 src_mask=0 tcp_flags=0 dest_tos=0 dest_autonomous_system=0 http_response=0 has_ipv6=1 ip_proto=TCP dns_ttl=0 security_context=00000000000000000000000000000000 netflow_src=00000000-0000-0000-0000-000000000000 src_tos=0 ssl_cipher_suite=TLS_NULL_WITH_NULL_NULL vlan_id=0 dest_port=443 ssl_expected_action=Unknown src_ip=172.1.2.3 monitor_rule_3=N/A src_port=54120 mac_address=00:00:00:00:00:00 sec_zone_egress=OUTSIDE client_app="SSL client" ip_layer=0 fw_rule=allowed_traffic ssl_flow_error=0 connection_id=38534 monitor_rule_8=0 dns_query="" monitor_rule_2=N/A ssl_server_name="" monitor_rule_1=N/A monitor_rule_6=N/A monitor_rule_7=N/A monitor_rule_4=N/A monitor_rule_5=N/A rec_type=71 sec_zone_ingress=internal_3 ssl_actual_action=Unknown rec_type_simple=RNA src_tos=0 src_pkts=64 security_context=00000000000000000000000000000000 sinkhole_uuid=00000000-0000-0000-0000-000000000000 mac_address=00:00:00:00:00:00 sec_intel_event=No src_ip_country=unknown sensor=fp03 fw_rule_reason=N/A client_version="" fw_policy=00000000-0000-0000-0000-00005eabbd17 sec_intel_ip=N/A src_ip=10.10.5.4 client_app="Web browser" file_count=0 ip_proto=TCP url_category=1048622 dest_mask=0 has_ipv6=1 ssl_session_id=0000000000000000000000000000000000000000000000000000000000000000 event_subtype=1 event_sec=1588485658 ssl_version=Unknown monitor_rule_8=0 ssl_policy_id=00000000000000000000000000000000 ssl_flow_flags=0 event_desc="Flow Statistics" ssl_url_category=0 src_mask=0 ssl_cert_fingerprint=0000000000000000000000000000000000000000 ips_count=0 instance_id=4 web_app=Microsoft rec_type_desc="Connection Statistics" fw_rule_action=Allow dns_ttl=0 dns_resp_id=0 dns_query="" iface_ingress=fp03 ssl_ticket_id=0000000000000000000000000000000000000000 user=Unknown tcp_flags=0 dest_bytes=222427 netflow_src=00000000-0000-0000-0000-000000000000 event_type=1003 first_pkt_sec=1588485658 dest_tos=0 
src_autonomous_system=0 last_pkt_sec=1588485689 dest_autonomous_system=0 vlan_id=0 ssl_server_cert_status="Not Checked" sec_zone_egress=OUTSIDE iface_egress=OUTSIDE user_agent=Microsoft-Delivery-Optimization/10.0 dest_pkts=159 ssl_flow_status=Unknown url_reputation=Unknown dns_rec_id=0 legacy_ip_address=0.0.0.0 dest_ip_country="united states" snmp_in=0 snmp_out=0 src_port=55403 http_response=0 monitor_rule_6=N/A monitor_rule_7=N/A monitor_rule_4=N/A monitor_rule_5=N/A monitor_rule_2=N/A monitor_rule_3=N/A monitor_rule_1=N/A ssl_rule_id=0 ssl_server_name="" dest_port=80 ip_layer=0 ssl_flow_error=0 netbios_domain="" connection_id=53253 referenced_host=11.tlu.dl.delivery.mp.microsoft.com app_proto=HTTP src_bytes=4565 ssl_cipher_suite=TLS_NULL_WITH_NULL_NULL http_referrer="" num_ioc=0 ssl_expected_action=Unknown dest_ip=72.21.81.240 ssl_flow_messages=0 url="http://11.tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/" event_usec=0 fw_rule=Allowed_traffic rec_type=71 dns_resp_id=0 ips_count=0 ssl_cipher_suite=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 ssl_version=TLSv1.2 ssl_rule_id=999999 app_proto=HTTPS src_mask=0 ssl_server_cert_status=Valid sec_intel_event=No ssl_cert_fingerprint=33b3b7e9da25f5a004e96435d6fb5477dbed27eb src_pkts=13 rec_type_simple=RNA ssl_expected_action="Do Not Decrypt" has_ipv6=1 num_ioc=0 sec_zone_ingress=inside ssl_flow_flags=75557313 referenced_host="" dest_autonomous_system=0 src_ip_country=unknown event_desc="Flow Statistics" mac_address=00:00:00:00:00:00 ssl_server_name="" ssl_flow_error=0 ssl_flow_messages=56 fw_rule_reason=N/A iface_ingress=inside connection_id=14354 last_pkt_sec=1588604153 fw_policy=default_policy client_version="" url_reputation=Trusted dest_port=443 url_category="Computers and Internet" event_type=1003 dns_ttl=0 sec_zone_egress=centurylink-outside ssl_policy_id=3f8afb02550111eab51b943gd488bf2d dest_mask=0 ssl_actual_action="Do Not Decrypt" sensor=fp03 http_response=0 first_pkt_sec=1588604043 dns_rec_id=0 
dest_ip=52.114.132.73 ssl_session_id=ed370000c1e5509b2cdbcca12e3e2ebba8fb43fd26e10773c6fds7fdf2342b96 dest_ip_country="united states" event_usec=0 dest_pkts=20 ssl_flow_status=Success file_count=0 legacy_ip_address=0.0.0.0 ip_proto=TCP user=csmith ip_layer=0 monitor_rule_3=N/A src_autonomous_system=0 monitor_rule_1=N/A monitor_rule_7=N/A monitor_rule_6=N/A monitor_rule_5=N/A monitor_rule_4=N/A monitor_rule_2=N/A tcp_flags=0 http_referrer="" vlan_id=0 sec_intel_ip=N/A ssl_url_category=0 fw_rule_action=Allow url=https://self.events.data.microsoft.com netbios_domain="" src_ip=172.1.4.9 netflow_src=00000000-0000-0000-0000-000000000000 instance_id=4 fw_rule=Inside-access-out user_agent="" monitor_rule_8=0 snmp_in=0 dns_query="" iface_egress=outside event_subtype=1 event_sec=1588604154 dest_tos=0 security_context=00000000000000000000000000000000 src_port=63433 src_bytes=2833 web_app=Microsoft client_app="SSL client" src_tos=0 snmp_out=0 rec_type_desc="Connection Statistics" dest_bytes=6466 sinkhole_uuid=00000000-0000-0000-0000-000000000000 ssl_ticket_id=0000000000000000000000000000000000000000
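Since the sample events above are key=value pairs, a rex sketch like the following may pull the URL regardless of where it sits in the event (the url value appears both unquoted and double-quoted in the samples; the field name extracted_url is arbitrary):

```
... | rex field=_raw "url=\"?(?<extracted_url>[^\s\"]+)"
```

Against the samples this captures https://workforcenow.adp.com as well as the quoted http://11.tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/ value. The same regex could be made permanent as an EXTRACT in props.conf for the sourcetype, though if the Firepower add-on already defines a url extraction, a conflicting one may need to be resolved first.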
Hi All, I need your help in writing post-process & base searches. My dashboard requires a chart command in the first panel. With the post-process search below in the first panel, I'm unable to write the base searches for the following panels, which require stats, table, and sometimes raw data in them.

1st panel: search id = "base"
Query: index= source= | regex field1 | regex field2 | chart count over field1 by field2

2nd panel: I want to perform stats count by field1
3rd panel: I want to display the raw events for the selected value of field1 from the above panel
4th panel: I want to display stats count by field x

Please suggest how I can proceed with this. Will I be able to use streamstats or eventstats in the first panel, or is there any other suggestion for this? Thanks in advance.
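A common pattern that may fit here (a sketch — the base search should return the events themselves, or at least a superset of the needed fields, and each panel's transforming command becomes a post-process on top of it; index/source/field names are placeholders from the question, and $clicked_field1$ is an assumed drilldown token):

```
Base search (id="base"):  index=your_index source=your_source | regex field1 | regex field2

1st panel post-process:   | chart count over field1 by field2
2nd panel post-process:   | stats count by field1
3rd panel post-process:   | search field1=$clicked_field1$
4th panel post-process:   | stats count by fieldx
```

The key design point is that a base search ending in chart cannot feed panels needing raw events or different aggregations, so the chart moves into the first panel's post-process instead; no streamstats or eventstats is required in the base.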