All Topics


Hello, I am receiving these errors and my HF is not working properly. I think it is something related to the SSL interception and the intermediate and root CA, but I have not been able to pin it down.

Root Cause(s): More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.

Last 50 related messages:

03-15-2024 08:14:15.748 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=34.216.133.150 port=9997 connid=0 _numberOfFailures=2
03-15-2024 08:14:15.530 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=35.162.96.25 port=9997 connid=0 _numberOfFailures=2
03-15-2024 08:14:15.296 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=44.231.134.204 port=9997 connid=0 _numberOfFailures=2
03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=44.231.134.204:9997 connid=0
03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=35.162.96.25:9997 connid=0
03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=34.216.133.150:9997 connid=0
03-15-2024 08:12:56.049 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=35.162.96.25 port=9997 connid=0 _numberOfFailures=2

This is my outputs.conf:

[tcpout]
defaultGroup = indexers

[tcpout:indexers]
server = inputs1.tenant.splunkcloud.com:9997, inputs2.tenant.splunkcloud.com:9997, inputs3.tenant.splunkcloud.com:9997, inputs4.tenant.splunkcloud.com:9997, inputs5.tenant.splunkcloud.com:9997, inputs6.tenant.splunkcloud.com:9997, inputs7.tenant.splunkcloud.com:9997, inputs8.tenant.splunkcloud.com:9997, inputs9.tenant.splunkcloud.com:9997, inputs10.tenant.splunkcloud.com:9997, inputs11.tenant.splunkcloud.com:9997, inputs12.tenant.splunkcloud.com:9997, inputs13.tenant.splunkcloud.com:9997, inputs14.tenant.splunkcloud.com:9997, inputs15.tenant.splunkcloud.com:9997
forceTimebasedAutoLB = true
autoLBFrequency = 40
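A quick way to check whether an SSL-intercepting proxy is rewriting the certificate chain on port 9997 (a sketch; the hostname is taken from the outputs.conf above) is to inspect the chain the forwarder actually sees:

openssl s_client -connect inputs1.tenant.splunkcloud.com:9997 -showcerts </dev/null

If the issuer shown is a corporate proxy CA rather than the Splunk Cloud chain, interception is the likely cause, and port 9997 would need to be exempted from it.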
In my Splunk instance, logs are sent to the central instance via a universal forwarder, and the deployment server has been enabled to distribute the different configurations to the various clients. For parsing Windows logs, the Windows add-on is used, which also provides a specific sourcetype. The problem is that for Windows clients we are unable to filter authentication events by both:
- Status (success/logoff/logon failure), with EventCode 4624 = logon success, 4625 = logon failure, 4634 = logoff.
- Account name. That is, we want to filter the logs that contain a certain substring in the account name with a regex, always defined within the whitelist that contains the event filter for the EventCodes listed above.
At present, events reach the central instance filtered only by EventCode rather than by EventCode plus the substring contained in the account name field. Could you help me?
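For reference, Windows event log inputs support advanced whitelist entries that pair an EventCode regex with a regex over the event text, which is one way to combine the two conditions; a minimal sketch, assuming the Security channel and a hypothetical account-name substring svc_:

# inputs.conf on the forwarder, deployed from the deployment server
[WinEventLog://Security]
# keep only 4624/4625/4634 events whose Account Name contains "svc_"
whitelist1 = EventCode="^(4624|4625|4634)$" Message="Account Name:\s+\S*svc_"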
Hi, I'm using the functions PERC95 (p95) and PERC99 (p99) to retrieve the request duration/response time for requests from a server farm (frontend servers). As far as I have understood, these functions should give you the MAX value of a subset of values: in a hypothetical scenario where you have 100 requests during 1 second, p95 should take the 95 requests with the lowest response times and, out of those 95 requests, pick the highest response time as the p95 value. Say the response times of those 95 requests were in the range of 50 ms to 300 ms; the p95 value would then be 300 ms. I've used searches with p95 and p99 and thought this was correct, but looking at the events, the response times I get out of both p95 and p99 do not make any sense, as this "300 ms" value cannot be found, and very often I cannot find any value close to this number at all. Could anyone enlighten me here about the output I'm getting?

Example search:

index=test host=server sourcetype=app_httpd_access AND "example"
| bin _time span=1s
| stats p99(A_1) as RT_p99_ms p95(A_1) as RT_p95_ms count by _time
| eval RT_p95_ms=round(RT_p95_ms/1000,2)
| eval RT_p99_ms=round(RT_p99_ms/1000,2)

p95 value output: 341,87 ms. Total number of values returned during 1 second for p95: 15. Response time output in ms (I was expecting the value 341,87 at the TOP here, but it's not present):
343,69
330,675
329,291
301,369
279,018
246,719
106,387
103,216
100,232
44,794
44,496
42,491
38,974
38,336
34,201
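For reference, perc<X> uses an approximation algorithm and can interpolate between neighboring data points, so the reported percentile is not guaranteed to be one of the observed values; exactperc<X> avoids the approximation. A quick sketch for comparing the two on a small generated set:

| makeresults count=15
| streamstats count as n
| eval rt=n*10
| stats perc95(rt) as p95_approx exactperc95(rt) as p95_exact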
I am running into a problem: I placed an app on the SH and connected it to a UF, and on the UF the data path is being monitored, but when I search I cannot find the data. Below is my inputs.conf:

[monitor:///tutorialdata/www*/access.log]
index = web
host_segment = 2
sourcetype = web:access

[monitor:///tutorialdata/www*/secure.log]
index = web
host_segment = 2
sourcetype = web:secure

and props.conf:

[web:access]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

[web:secure]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
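A quick check (a sketch) is to confirm whether the events were indexed at all, and under which metadata, over all time:

index=web earliest=0
| stats count by host, source, sourcetype

and to look for tailing activity or errors on the forwarder side:

index=_internal sourcetype=splunkd "tutorialdata" (TailReader OR WatchedFile)

Note that EVENT_BREAKER only takes effect in props.conf on the universal forwarder itself; if the props.conf lives only in the app on the SH, the UF never sees it.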
Hi All, I have a Splunk cluster environment where, while pulling data from a source, the data gets indexed twice, not as a separate event, but within the same event. So every field has the same value appearing twice, making it a multivalue field. The same source configuration works fine on a standalone Splunk server but fails on the cluster. I have tried keeping props.conf only in the data app on the indexer; however, with that, field extraction does not happen. If I keep props.conf in both the HF and the data app, field extraction happens but with the issue above. I would appreciate it if anyone has any lead on this. TIA.
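For reference, same-value multivalue fields often appear when the same extraction runs at both index time and search time, for example INDEXED_EXTRACTIONS on the HF combined with automatic KV extraction at search time; a minimal sketch of the search-time side, assuming a JSON feed (the stanza name is a placeholder):

# props.conf on the search head
[your:sourcetype]
KV_MODE = none
AUTO_KV_JSON = false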
Thanks in advance. I have a JSON array "content.List of Batches Processed{}", and Splunk already extracts the field "content.List of Batches Processed{}.BatchID", whose count shows as 26; but within "content.List of Batches Processed{}.BatchID" we have 134 records. So I want to extract the multiple JSON values as fields: from the log below, I want to extract all the values of P_REQUEST_ID, P_BATCH_ID, and P_TEMPLATE.

Query I tried to fetch the data:

| eval BatchID=spath("content.List of Batches Processed{}*", "content.List of Batches Processed{}.P_BATCH_ID"), Request=spath(_raw, "content.List of Batches Processed{}.P_REQUEST_ID")
| table BatchID Request

Sample event:

"content" : {
  "List of Batches Processed" : [
    { "P_REQUEST_ID" : "177", "P_BATCH_ID" : "1", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "1r7", "P_BATCH_ID" : "2", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "1577", "P_BATCH_ID" : "3", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "16577", "P_BATCH_ID" : "4", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" }
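One common pattern (a sketch, to be appended to the base search) is to pull the array out with spath, expand it so each array element becomes its own row, and then parse each element:

| spath path="content.List of Batches Processed{}" output=batch
| mvexpand batch
| spath input=batch
| table P_REQUEST_ID P_BATCH_ID P_TEMPLATE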
Hello All, I am getting this message on the SH: "The searchhead is unable to update the peer information. Error = 'Unable to reach the cluster manager' for manager=https://x.x.x.x:8089". The SH is trying to connect to the CM that is on standby. We have 2 CMs, one active and the other standby; below are the configs on the CM and SH. Could you please advise if anything is wrong in the config? Thanks!

CM config:

[clustering]
mode = manager
manager_switchover_mode = auto
manager_uri = clustermanager:dc1,clustermanager:dc2
pass4SymmKey = key
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,site1:1,site2:1,total:3
site_search_factor = origin:1,site2:1,site1:1,total:3
cluster_label = publisher_cluster
access_logging_for_heartbeats = 0
cm_heartbeat_period = 3
precompress_cluster_bundle = 1
rebalance_threshold = 0.9

[clustermanager:dc1]
manager_uri = https://x.x.x.x:8089

[clustermanager:dc2]
manager_uri = https://x.x.x.x:8089

SH config:

[general]
serverName = x.com
pass4SymmKey = key
site = site2

[clustering]
mode = searchhead
manager_uri = clustermanager:dc1, clustermanager:dc2

[clustermanager:dc1]
multisite = true
manager_uri = https://x.x.x.x:8089
pass4SymmKey = key

[clustermanager:dc2]
multisite = true
manager_uri = https://x.x.x.x:8089
pass4SymmKey = key

[replication_port://9100]

[shclustering]
conf_deploy_fetch_url = https://x.x.x.x:8089
mgmt_uri = https://x.x.x.x:8089
disabled = 0
pass4SymmKey = key
replication_factor = 2
shcluster_label = publisher_shcluster
id = B57109F1-5D63-4FC9-9BFC-BE6B0375D9A7
manual_detention = off

Dhana
I've set up Splunk Enterprise as a trial in a test domain; however, I'm having issues importing logs from different remote sources. Firstly, it says to connect to an LDAP before importing remote data. I tried this, but it won't connect to the domain; there are too many fields in here to fill in without being given examples: "Could not find userBaseDN on the LDAP server". I tried installing the Splunk forwarder on a Windows-based DC and set the Splunk server's forwarding and receiving to receive on port 9997. Then I tried importing the host again and keep getting errors about WMI classes from host blah blah. Where is the documentation on setting up WMI for different remote sources? This piece should be easy. God help me when I try to add logs from networking devices. Real answers only please, no time wasters. Cheers,
I have a dashboard with a dropdown for services and 10 panels. When I select a service from the dropdown, all 10 panels display according to the chosen service. For example, if I choose "passed services" from the dropdown, instead of showing all panels I want to see only panel1 to panel5 and hide panel6 to panel10. How can I do that?

<form>
  <label>Services_Dashboard</label>
  <fieldset submitButton="true" autoRun="true">
    <input type="time" token="time" searchWhenChanged="true">
      <label> </label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="services" searchWhenChanged="true">
      <label>Services</label>
      <choice value="*">all</choice>
      <choice value="Lgn_srvc">Login services</choice>
      <choice value="Fld_srvc">Failed services</choice>
      <choice value="Pass_srvc">passed services</choice>
      <choice value="Tmout_srvc">timeout services</choice>
      <choice value="Lgout_srvc">logout service</choice>
      <choice value="Err_srvc">error services</choice>
      <choice value="War_srvc">warning services</choice>
      <initialValue>*</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>panel1 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel2 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel3 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel4 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel5 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel6 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel7 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel8 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel9 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel10 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
  </row>
</form>
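One way to get this show/hide behavior (a sketch; the token names show_first and show_last are hypothetical) is to set tokens from the dropdown's <change> handler and gate each panel with a depends attribute, since a panel whose depends token is unset is hidden:

<input type="dropdown" token="services" searchWhenChanged="true">
  <label>Services</label>
  <choice value="*">all</choice>
  <choice value="Pass_srvc">passed services</choice>
  <change>
    <condition value="Pass_srvc">
      <set token="show_first">true</set>
      <unset token="show_last"></unset>
    </condition>
    <condition>
      <set token="show_first">true</set>
      <set token="show_last">true</set>
    </condition>
  </change>
</input>

<!-- panels 1-5 carry depends="$show_first$" -->
<panel depends="$show_first$">
  <title>panel1 for $services$</title>
  <search>
    <query>index=xxx | stats count by app</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
</panel>

<!-- panels 6-10 carry depends="$show_last$" -->
<panel depends="$show_last$">
  <title>panel6 for $services$</title>
  <search>
    <query>index=xxx | stats count by app</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
</panel>

Additional <condition value="..."> blocks can set or unset the same tokens for the other dropdown choices as needed.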
I want to get pfSense logs into Splunk to do some analysis. I tried this method, "https://www.jaycroos.com/splunk-to-monitor-pfsence-logs/", but it didn't work for me. Now I want to try the Splunk universal forwarder: how can I install the Splunk universal forwarder on my pfSense box to get the logs into Splunk? Any guidance would be appreciated. Please let me know if there is another method by which I can get my pfSense logs to the Splunk server.
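For reference, the universal forwarder is not packaged for pfSense/FreeBSD in a supported way; a common alternative (a sketch; the port, index, and sourcetype are placeholders) is to have pfSense send remote syslog to a network input on the Splunk server:

# inputs.conf on the Splunk server
[udp://5140]
sourcetype = pfsense
index = network
connection_host = ip

On the pfSense side, the remote logging target is set under Status > System Logs > Settings > Remote Logging Options.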
Hi Splunk Community, I need to create an alert that only gets triggered if two conditions are met. As a matter of fact, the conditions are layered:
- Search results are >3 in a 5-minute interval.
- Condition 1 is true 3 times over a 15-minute interval.

I thought I would create 3 sub-searches within the search, output the result in a "counter", and then run a search to identify whether the "counter" values are >3:

index=foo mal_code="foo" source="foo.log"
| search "{\\\"status\\\":{\\\"serverStatusCode\\\":\\\"500\\\"" earliest=-5m@m latest=now
| stats count as event_count1
| search "{\\\"status\\\":{\\\"serverStatusCode\\\":\\\"500\\\"" earliest=-10m@m latest=-5m@m
| stats count as event_count2
| search "{\\\"status\\\":{\\\"serverStatusCode\\\":\\\"500\\\"" earliest=-15m@m latest=-10m@m
| stats count as event_count3
| search event_count*>0
| stats count as result

I am not sure my time modifiers are working correctly, but I am not getting the results I expected. I would appreciate some advice on how to go about this.
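One way to express the layered condition in a single scheduled search (a sketch, reusing the status-code string above) is to bucket the last 15 minutes into 5-minute intervals and count how many intervals exceed the threshold:

index=foo mal_code="foo" source="foo.log" "{\\\"status\\\":{\\\"serverStatusCode\\\":\\\"500\\\"" earliest=-15m@m latest=@m
| bin _time span=5m
| stats count by _time
| where count > 3
| stats count as intervals_over_threshold
| where intervals_over_threshold >= 3

The alert can then trigger on "number of results > 0".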
I am trying to create a props.conf to parse a custom timestamp. To do so, I wanted to upload data, use the "Set Source Type" page to configure the timestamp parameters, and then copy the resulting props.conf to the clipboard. However, the preview on this page does not update when I click "Apply Settings". The preview only changes when I select a new source type from the dropdown next to "Save As"; changing anything in "Event Breaks", "Timestamp", or "Advanced" does nothing. Something to note: there is a red exclamation point in the top left saying "Can only preview uploaded files"; I am unsure what this means. When I do save the data and search it, it DOES look like the source type changes I made took effect, but this really isn't a feasible way to test and configure my parameters. Is there any way to get this visible in "Add Data"'s preview?
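For reference, the same result the preview page would generate can be written by hand and tested with a fresh one-off upload; a minimal sketch with placeholder values:

# props.conf (sourcetype name, prefix, and format are placeholders)
[my_custom_sourcetype]
TIME_PREFIX = ^timestamp=
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30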
Need assistance with this: I have installed the app, pointed it at the address of the on-prem server housing BlueCat, and ensured that the account we created can log in and has API access. I have pointed it there and given it credentials, but nothing is being pulled from BlueCat into Splunk. A little assistance, please; I read the README files and they didn't help much. Thanks, Justin
Hello, I'm attempting to change the sourcetype and host on a single event. The tricky part is that I want the second transform to act on the result of the first transform.

For example, my data comes in as:

index=main host=heavy_forwarder sourcetype=aws:logbucket

I want the data to change to:

index=main host=amazonfsx.host sourcetype=XmlWinEventLog

The catch is that I have other sourcetypes coming in as aws:logbucket and getting transformed to various other sourcetypes (cloudtrail, config, etc.). On those events I do not want to run the regex that changes the host value.

Suppose I have a props.conf that states:

TRANSFORMS-modify_data = aws_fsx_sourcetype, aws_fsx_host

and a transforms.conf of:

[aws_fsx_sourcetype]
SOURCE_KEY = MetaData:Source
REGEX = ^source::s3:\/\/fsxbucket\/.*
FORMAT = sourcetype::XmlWinEventLog
DEST_KEY = MetaData:Sourcetype

[aws_fsx_host]
REGEX = <Computer>([^.<]+).*?<\/Computer>
FORMAT = host::$1
DEST_KEY = MetaData:Host

I'm worried this will have unexpected results on the other sourcetypes that aws:logbucket carries, like cloudtrail and config. If I break it out into two separate transform classes, like this:

TRANSFORMS-modify_data = aws_fsx_sourcetype
TRANSFORMS-modify_data2 = aws_fsx_host

I'm worried the typing pipeline won't see the second transform. What is the most effective way to accomplish this?

Thanks, Nate
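One way to keep the host rewrite away from the other aws:logbucket feeds (a sketch; the source pattern is taken from the transform above) is to key the props stanza on the source rather than the sourcetype, so both transforms, which run in listed order within the one TRANSFORMS- class, only ever see the FSx events:

# props.conf
[source::s3://fsxbucket/...]
TRANSFORMS-modify_data = aws_fsx_sourcetype, aws_fsx_host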
We are having difficulty excluding OTEL logs whose fields are in CamelCase or whose entries contain special characters. Fields without capitalization and/or special-character values can be parsed out, but not the others. Here is an example config we are looking at (attached as YAML; key portion shown):

filelog/kube-apiserver-audit-log:
  include:
    - /var/log/kubernetes/kube-apiserver.log
  include_file_name: false
  include_file_path: true
  operators:
    - id: extract-audit-group
      type: regex_parser
      regex: '\s*\"resourceGroup\"\s*\:\s*\"(?P<extracted_group>[^\"]+)\"\s*'
    - id: filter-group
      type: filter
      expr: 'attributes.extracted_beta == "batch"'
    - id: remove-extracted-group
      type: remove
      field: attributes.extracted_group
    - id: extract-audit-api
      type: regex_parser
      regex: '\"level\"\:\"(?P<extracted_audit_beta>[^\"]+)\"'
    - id: filter-api
      type: filter
      expr: 'attributes.extracted_audit_beta == "Metadata"'
    - id: remove-extracted-api
      type: remove
      field: attributes.extracted_api
    - id: extract-audit-verb
      type: regex_parser
      regex: '\"verb\"\:\"(?P<extracted_verb>[^\"]+)\"'
    - id: filter-verb
      type: filter
      expr: 'attributes.extracted_verb == "watch" || attributes.extracted_verb == "list"'
    - id: remove-extracted-verb
      type: remove
      field: attributes.extracted_verb

The resourceGroup comparison is failing; verb and level are succeeding. Here is an example log that would be pulled in:

{"apiVersion":"batch/v1","component":"sync-agent","eventType":"MODIFIED","kind":"CronJob","level":"info","msg":"sent event","name":"agentupdater-workload","namespace":"vmware-system-tmc","resourceGroup":"batch","resourceType":"cronjobs","resourceVersion":"v1","time":"2024-03-14T18:17:11Z"}
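For what it's worth, a sketch of the first parser/filter pair with the expr keyed to the attribute the parser actually captures (the original compares attributes.extracted_beta, which no operator sets; assuming the intent is to match resourceGroup == "batch"):

- id: extract-audit-group
  type: regex_parser
  regex: '"resourceGroup":"(?P<extracted_group>[^"]+)"'
- id: filter-group
  type: filter
  expr: 'attributes.extracted_group == "batch"'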
There will be planned maintenance for Splunk APM and RUM between March 26, 2024 and March 28, 2024, as specified below:

Realm                  Splunk APM & RUM Planned Maintenance Window
app.eu0.signalfx.com   March 26, 2024 from 8.00 pm to 11.00 pm GMT
app.jp0.signalfx.com   March 27, 2024 from 5.00 am to 8.00 am JST (GMT+9)
app.au0.signalfx.com   March 27, 2024 from 7.00 am to 10.00 am AEDT (GMT+11)
app.us0.signalfx.com   March 27, 2024 from 8.00 pm to 11.00 pm ET (GMT-4)
app.us1.signalfx.com   March 28, 2024 from 5.00 pm to 8.00 pm PT (GMT-7)

During this maintenance window, you might experience delays in ingested data becoming available to view or query in areas of the Splunk APM & RUM interface that use Troubleshooting MetricSets, for example Tag Spotlight and Service Map. Query performance may also be impacted. Any delayed data will not be lost during this time, as it is durably queued for processing.

You can find which realm you're using by following the steps below:
1. In the Observability Cloud main menu, select Settings.
2. Select your user name at the top of the Settings menu.
3. On the Organizations tab, you can view or copy your realm, organizations, and organization IDs.

Please note that the planned maintenance activity only applies to Splunk APM & RUM deployed in EU0, JP0, AU0, US0, and US1. Other realms, products, and features of Splunk Observability Cloud will not be impacted by this maintenance window. For any questions, please reach out on the Splunk Support Portal to create a support case (select Get Started > Create a Case > Select Support > Select Splunk Application Performance Monitoring).
Hello - Trying to create a query that will output additions to Azure security group memberships. I am able to successfully output the information I need, but the newValue field contains multiple different values. How do I omit the 'null' value and the security group IDs? I only want it to show the actual name of the security group. The way the logs are currently parsed, all of those values land in the same field: "properties.targetResources{}.modifiedProperties{}.newValue".

Query:

index="azure-activity"
| search operationName="Add member to group"
| stats count by "properties.initiatedBy.user.userPrincipalName", "properties.targetResources{}.userPrincipalName", "properties.targetResources{}.modifiedProperties{}.newValue", operationName, _time
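One way to drop the null entries and bare GUIDs from the multivalue field (a sketch; the GUID regex is an assumption about what the unwanted values look like) is mvfilter in an eval:

index="azure-activity" operationName="Add member to group"
| eval groupName=mvfilter(!match('properties.targetResources{}.modifiedProperties{}.newValue', "null|^\"?[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\"?$"))
| stats count by "properties.initiatedBy.user.userPrincipalName", "properties.targetResources{}.userPrincipalName", groupName, operationName, _time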
I'm trying to build a query to give real-time results for a value, but there is a time delay between when the data is sent and when it is indexed. This means that when I do a real-time query over the last 60 s, I get 20 s of data and 40 s of blank. I'd like to load the last 60 s of received data in real time, not the data received in the last 60 s. Any ideas? I've tried:

index=ind sourcetype=src (type=instrument)
| where temperature!=""
| timechart span=1s values(temperature)

and:

index=ind sourcetype=src (type=instrument)
| where temperature!=NULL
| timechart span=1s values(temperature)

No luck with either.
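One way to approximate "the last 60 s of received data" (a sketch; the 5-minute lookback is an assumed upper bound on the ingestion delay) is to search a wider window and keep only the newest 60 s relative to the latest event actually present:

index=ind sourcetype=src type=instrument earliest=-5m
| where isnotnull(temperature)
| eventstats max(_time) as latest_time
| where _time >= latest_time - 60
| timechart span=1s values(temperature)

Because eventstats pins the window to the newest event that has arrived, the ingestion lag no longer produces a blank tail.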
Hi, I want to extract the value c611b43d-a574-4636-9116-ec45fe8090f8 from the event below. Could you please let me know how I can do this using rex field=httpURL?

httpURL: /peerpayment/v1/payment/c611b43d-a574-4636-9116-ec45fe8090f8/performAction
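A sketch of one possible extraction, assuming the wanted value is always the UUID-shaped segment after /payment/ (the field name payment_id is a placeholder):

| rex field=httpURL "/payment/(?<payment_id>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"

The captured value lands in the new field payment_id.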
Hi, I have multiple searches that follow a naming convention like "Server1_Monitoring", "Server2_Monitoring", and so on. I used an asterisk in the stanza name to match all of them in the savedsearches.conf file, like:

[Server*_Monitoring]
dispatch.ttl = 3p

I restarted the search head after the change, but it didn't work. Is there any way to avoid listing all the searches explicitly in savedsearches.conf? Thank you!
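For reference, stanza names in savedsearches.conf are literal search names and are not pattern-matched, so the wildcard stanza is not applied. A scripted alternative (a sketch; host, app, and credentials are placeholders) is to set the attribute per search through the REST API:

# update one saved search's dispatch.ttl; repeat or loop for each matching search
curl -k -u admin:changeme \
  https://sh.example.com:8089/servicesNS/nobody/search/saved/searches/Server1_Monitoring \
  -d dispatch.ttl=3p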