All Topics

Hi all, my search results are formatted similarly to XML/HTML, e.g.: <last_modified_date>1669004771000</last_modified_date><assigned_group>Test Group 1</assigned_group><assigned_support_company>Company 1</assigned_support_company><assigned_support_organization>Analytics</assigned_support_organization><assignee>John Doe</assignee> I would like to split these results into separate fields, keeping the headings that appear inside the <> tags. I have tried rex and other commands, but I am stuck.
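One possible approach (an untested sketch; it assumes the tag/value pairs live in _raw and that the tag names are valid field names) is to extract every <tag>value</tag> pair with a multi-match rex, zip names and values together, and turn each pair into its own field:

  ... | rex max_match=0 field=_raw "<(?<tag_name>[^>/]+)>(?<tag_value>[^<]*)</[^>]+>"
  | eval pair=mvzip(tag_name, tag_value, "=")
  | mvexpand pair
  | rex field=pair "(?<name>[^=]+)=(?<value>.*)"
  | eval {name}=value

If the events are well-formed XML, the built-in xmlkv command may do the same job in a single step.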
While investigating logs coming in from an OSSEC server I found that `SPLUNK_TA_ossec` alters data erroneously. The investigated event is for Rule 18149 from a Windows server. The original user is `WINSERVER01$` - a "machine account", as indicated by the trailing "$" sign. `SPLUNK_TA_ossec` (current version is 4.1.0) simply strips off the dollar sign in `transforms.conf` in the `[kv_for_default_ossec]` stanza and shows the user as `WINSERVER01`, just like a normal username. As a result, a search that filters out machine accounts, such as `NOT user=*$`, still shows and counts these accounts. => Error
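A quick way to spot affected events (a rough sketch, not taken from the TA itself; the index, sourcetype, and regex are assumptions and will need adjusting to the actual OSSEC event format) is to re-extract the username from the raw event and compare it with the user field the add-on produced:

  index=ossec 18149
  | rex field=_raw "(?<raw_user>\S+\$)"
  | where isnotnull(raw_user) AND raw_user!=user

Wherever raw_user ends in "$" but user does not, the dollar sign was stripped during field extraction; a local override of the `[kv_for_default_ossec]` regex in transforms.conf would be the place to correct it.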
I have the following search queries:

  API Error Alert
  ---------------
  index=myindex sourcetype=my-app:app | spath message | regex message="^.*Error while creating account.*$" | dedup my_id_field

  API Down Alert
  ---------------
  index=myindex sourcetype=my-app:app | spath message | regex message="^.*api-down.*$" | dedup my_id_field

  Update API Error
  ------------------
  index=myindex sourcetype=my-app:app | spath message | regex message="^.*Error while updating trial account.*$" | dedup my_id_field

I have some more of the same kind; each checks for a different message using a regular expression. Now I would like to create an email alert for all these events, so I want to combine them into one query and create a single alert rather than individual alerts. How can I combine these queries? The alert should trigger if any of these conditions is true. I have tried the following, but it is not working:

  index=myindex sourcetype=my-app:app | spath message | regex message="^.*Error while creating account.*$" | regex message="^.*api-down.*$" | regex message="^.*Error while updating trial account.*$" | regex message="^.*JWT Token creation failed with error.*$" | regex message="^.*Error while fetching IPLookU.*$"
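Chaining several regex commands ANDs the conditions together, so no single event can match them all. A sketch of an OR version, combining the same patterns from the post with regex alternation:

  index=myindex sourcetype=my-app:app
  | spath message
  | regex message="(Error while creating account|api-down|Error while updating trial account|JWT Token creation failed with error|Error while fetching IPLookU)"
  | dedup my_id_field

Each event only needs to match one alternative, so a single alert on this search covers all of the individual cases.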
Hi everyone, currently I am trying to expand one of the multivalue fields, but the result comes back with the error "Field 'deployment' does not exist in the data."

  index=json
  | rex mode=sed "s/.*-\s//g"
  | spath
  | rename ops{}.steps{}.steps{}.address{}.deployment as deployment
  | mvexpand deployment
  | mvexpand operation
  | table deployment
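One thing worth trying (a sketch only; the path is copied from the query above and may need adjusting to the actual JSON depth): let spath build the multivalue field directly instead of renaming the auto-extracted name, since deeply nested arrays are often not auto-extracted under the expected name.

  index=json
  | rex mode=sed "s/.*-\s//g"
  | spath path=ops{}.steps{}.steps{}.address{}.deployment output=deployment
  | mvexpand deployment
  | table deployment

If deployment is still empty, spath may be hitting its extraction depth/size limits, or the sed replacement may be altering the JSON before it can be parsed.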
Hi, we were using the Splunk App for AWS, which Splunk has stopped supporting and is now a legacy app. Splunk recommends migrating to the Splunk App for Content Packs (which includes the Content Pack for AWS Reports and Dashboards), but migrating to it requires ITSI or IT Essentials Work as a prerequisite to integrate and use the Content Packs. ITSI is a paid app, and IT Essentials Work has no version that is compatible with Splunk Enterprise 9.0. Why has Splunk retired a free app when the alternative forces us, indirectly because of ITSI, onto a paid product?
Hello, we have the Splunk DB Connect app working in our environment, but it suddenly stopped working and I can see this log:

  2022-11-21 23:22:39.050 -0500 [dw-203047 - PUT /api/inputs/server_average_latency] ERROR c.s.d.m.repository.DefaultConfigurationRepository - action=failed_to_get_the_conf reason=HTTP 401 -- call not properly authenticated
  com.splunk.HttpException: HTTP 401 -- call not properly authenticated
    at com.splunk.HttpException.create(HttpException.java:84)
    at com.splunk.DBXService.sendImpl(DBXService.java:131)
    at com.splunk.DBXService.send(DBXService.java:43)
    at com.splunk.HttpService.get(HttpService.java:154)
    at com.splunk.Entity.refresh(Entity.java:381)
    at com.splunk.Entity.refresh(Entity.java:24)
    at com.splunk.Resource.validate(Resource.java:186)
    at com.splunk.Entity.validate(Entity.java:462)
    at com.splunk.Entity.getContent(Entity.java:157)
    at com.splunk.Entity.size(Entity.java:416)
    at java.util.HashMap.putMapEntries(HashMap.java:501)
    at java.util.HashMap.<init>(HashMap.java:490)
    at com.splunk.dbx.model.repository.JsonMapperEntityResolver.apply(JsonMapperEntityResolver.java:34)
    at com.splunk.dbx.model.repository.JsonMapperEntityResolver.apply(JsonMapperEntityResolver.java:18)
    at com.splunk.dbx.model.repository.DefaultConfigurationRepository.get(DefaultConfigurationRepository.java:92)
    at com.splunk.dbx.server.dbinput.task.DbInputTaskLoader.load(DbInputTaskLoader.java:63)
    at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.update(InputServiceImpl.java:221)
    at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.update(InputServiceImpl.java:38)
    at com.splunk.dbx.server.api.resource.InputResource.updateInput(InputResource.java:81)
    at sun.reflect.GeneratedMethodAccessor482.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43... CONTINUES

I am not going to add the whole log because it is huge. We have a cluster; DB Connect is installed on the search heads and all the inputs are configured on the heavy forwarder. If you have any idea what I can check to find the issue, please let me know. The environment runs on Linux. Thanks in advance. Best regards.
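The HTTP 401 suggests the DB Connect task server's call back into splunkd is no longer authenticated. A basic check (a hedged sketch; the account name and port are placeholders, not taken from the post) is to confirm that the identity DB Connect uses can still authenticate against the Splunk management port on the instance where the error appears:

  curl -k -u <dbconnect_service_account> https://localhost:8089/services/authentication/current-context

If that fails, re-entering the identity in the DB Connect configuration (or reviewing password/token expiry policies for that account) would be the next thing to look at.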
Hello, I am using the Splunk Cloud Victoria experience and am attempting to install the n-1 version of an app on a Splunk search head. When uploading the file for installation, I receive the following error: "This app is available for installation directly from Splunkbase. To install this app, use the App Browser page in Splunk Web." My question is: how can I install a non-current version of an app on Splunk? Or how can I bypass this message and upload the app manually? For reference, I am installing version 8.5.0 of the "Splunk Add-on for Unix and Linux", but this will need to be done for multiple apps. Thank you.
We have a tool that writes to a Splunk Cloud indexer, but we are trying to migrate to an on-prem system. During the migration we need to write to both at the same time, but unfortunately the two environments use different index names for the data. I've tried updating the files created by the .spl install with the new TCP output, but this approach seems to ignore one index or the other, causing issues on the indexer in question. I've also tried having two separate setups in the app directory, but then only one indexer receives data while the other is ignored. Is there a way to send the same data to two different instances, one in the cloud and one on-prem, with each expecting a different index?
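Sending every event to both destinations is normally done by listing two tcpout groups in outputs.conf on the forwarder; a minimal sketch (the group names and hostnames below are placeholders, not from the post) might look like this:

  [tcpout]
  defaultGroup = cloud_indexers, onprem_indexers

  [tcpout:cloud_indexers]
  server = inputs.<your-stack>.splunkcloud.com:9997

  [tcpout:onprem_indexers]
  server = onprem-indexer.example.com:9997

Note that outputs.conf cannot rewrite the index per destination, so the differing index names usually have to be handled on the receiving side (for example by creating a matching index name on one environment, or rewriting the index at ingest there) rather than in the forwarder's outputs.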
I'm collecting the System logs from a Windows 2012 R2 DHCP server using Splunk Universal Forwarder 9.0.1.0, sent to a Splunk Enterprise 8.2.5 indexer. Initially I was collecting all logs using this stanza:

  [WinEventLog://System]
  start_from = oldest
  disabled = 0
  current_only = 0

As per the inputs.conf guide here, among the allowed keys in a Windows Event Log monitoring stanza we may use SourceName: "The source of the entity that generated the event. Corresponds to Source in Event Viewer."

I wished to reduce the number of events collected to only those related to DHCP. When I look in the Windows Event Viewer I see this event, for example. However, if I search my indexed data from this server with

  SourceName = DHCP-Server

there are no results. A simple query over all time,

  index=windowsdhcp | stats count by SourceName

does, however, show this SourceName in the output: Microsoft-Windows-DHCP-Server. So I went back to Windows Event Viewer and looked at the XML view of the event, which is as follows:

  <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    <System>
      <Provider Name="Microsoft-Windows-DHCP-Server" Guid="{6D64F02C-A125-4DAC-9A01-F0555B41CA84}" EventSourceName="DhcpServer" />
      <EventID Qualifiers="0">1376</EventID>
      <Version>0</Version>
      <Level>3</Level>
      <Task>0</Task>
      <Opcode>0</Opcode>
      <Keywords>0x80000000000000</Keywords>
      <TimeCreated SystemTime="2022-11-21T17:14:28.000000000Z" />
      <EventRecordID>1236733</EventRecordID>
      <Correlation />
      <Execution ProcessID="0" ThreadID="0" />
      <Channel>System</Channel>
      <Computer>vmsys305.vhihealthcare.net</Computer>
      <Security />
    </System>
    <EventData>
      <Data>10.119.6.0</Data>
      <Data>89</Data>
      <Data>6</Data>
    </EventData>
  </Event>

There is no mention of Source=Microsoft-Windows-DHCP-Server in the event XML; the Provider Name is the same, however. I then pressed the Copy button, which yields the following pasted data (into Notepad):

  Log Name: System
  Source: Microsoft-Windows-DHCP-Server
  Date: 21/11/2022 21:13:18
  Event ID: 1376
  Task Category: None
  Level: Warning
  Keywords: Classic
  User: N/A
  Computer: vmsys609.vhihealthcare.net
  Description: IP address range of scope 10.119.6.0 is 89 percent full with only 6 IP addresses available.
  Event Xml:
  <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    <System>
      <Provider Name="Microsoft-Windows-DHCP-Server" Guid="{6D64F02C-A125-4DAC-9A01-F0555B41CA84}" EventSourceName="DhcpServer" />
      <EventID Qualifiers="0">1376</EventID>
      <Version>0</Version>
      <Level>3</Level>
      <Task>0</Task>
      <Opcode>0</Opcode>
      <Keywords>0x80000000000000</Keywords>
      <TimeCreated SystemTime="2022-11-21T21:13:18.000000000Z" />
      <EventRecordID>93904</EventRecordID>
      <Correlation />
      <Execution ProcessID="0" ThreadID="0" />
      <Channel>System</Channel>
      <Computer>vmsys609.vhihealthcare.net</Computer>
      <Security />
    </System>
    <EventData>
      <Data>10.119.6.0</Data>
      <Data>89</Data>
      <Data>6</Data>
    </EventData>
  </Event>

So now we have our SourceName=Source mapping.

Summary: this leads me to believe that the statement in the guide about SourceName being equivalent to the Source seen in Windows Event Viewer needs some clarification, namely that it is the Source as shown in the copied (XML) view of the event, not the default display.
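If the goal is simply to collect only the DHCP-related events from the System log, a whitelist on the input (a sketch based on the SourceName found above; the exact regex may need tuning) should do it:

  [WinEventLog://System]
  disabled = 0
  start_from = oldest
  current_only = 0
  whitelist = SourceName=%Microsoft-Windows-DHCP-Server%

The whitelist key accepts key=regex pairs for Windows Event Log inputs, so the filtering happens on the forwarder before the events are sent to the indexer.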
We are using AWS ECS with Fargate and trying to ship the container logs out to our Splunk Cloud instance using Fluentd.

1. On the AWS ECS side, below is the task_definition.json used to create services in ECS -

  {
    "family": "sample-springboot-ms-app",
    "taskRoleArn": "arn:aws:iam::958993399264:role/ecs-task-role",
    "executionRoleArn": "arn:aws:iam::958993399264:role/ecsTaskExecutionRole",
    "networkMode": "awsvpc",
    "containerDefinitions": [
      {
        "name": "sample-springboot-ms-app",
        "image": "958993399264.dkr.ecr.us-east-1.amazonaws.com/dev-repository:finance-sample-springboot-ms-v1-0-0-700950146",
        "cpu": 0,
        "portMappings": [
          { "containerPort": 8080, "hostPort": 8080, "protocol": "tcp" }
        ],
        "essential": true,
        "entryPoint": [],
        "command": [],
        "environment": [
          { "name": "APP_CONFIG_VALUE", "value": "12" },
          { "name": "START_UP_DELAY", "value": "9" },
          { "name": "SIMPLE_TEST", "value": "sample-test-value" }
        ],
        "environmentFiles": [],
        "mountPoints": [],
        "volumesFrom": [],
        "secrets": [],
        "logConfiguration": { "logDriver": "awsfirelens" }
      },
      {
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
            "awslogs-group": "debaspreet-debug-fluentd",
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "splunk-ecs"
          }
        },
        "image": "958993399264.dkr.ecr.us-east-1.amazonaws.com/dev-repository:fluent-701086531",
        "firelensConfiguration": {
          "type": "fluentd",
          "options": {
            "config-file-type": "file",
            "config-file-value": "/fluent.conf"
          }
        },
        "essential": true,
        "name": "log_router",
        "memory": 256,
        "memoryReservation": 128
      }
    ],
    "requiresCompatibilities": [ "FARGATE" ],
    "cpu": "1024",
    "memory": "2048",
    "runtimePlatform": { "operatingSystemFamily": "LINUX" }
  }

2. On the Fluentd side, below is the fluent.conf -

  <system>
    log_level info
  </system>
  <match **>
    @type splunk_hec
    protocol https
    hec_host ****************
    hec_port 8088
    hec_token *****************
    index debaspreet
    host_key ec2_instance_id
    source_key ecs_cluster
    sourcetype_key ecs_task_definition
    insecure_ssl true
    <fields>
      container_id
      container_name
      ecs_task_arn
      source
    </fields>
    <format>
      @type single_value
      message_key log
      add_newline false
    </format>
  </match>

3. Below is the Dockerfile for our custom Fluentd image that we host in ECR -

  FROM splunk/fluentd-hec:1.2.0
  ADD fluent.conf /fluent.conf

Despite the above configs, we still don't see the container logs in Splunk. I'm not sure what's incorrect in the config or what's missing. Our Splunk Cloud instance has been set up correctly, because we do see the event from the POST below arrive there -

  curl -k https://****************.com:8088/services/collector/event -H "Authorization: Splunk ****************" -d '{"event": "hello world"}'

Any pointers as to why this config isn't working? Thanks
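One way to narrow this down (a debugging sketch, not a fix; it only adds an extra output) is to copy every record Fluentd receives to stdout as well, so the records show up in the log_router container's CloudWatch stream. If records appear there but not in Splunk, the problem is in the splunk_hec output (token, host, index permissions); if nothing appears, FireLens is not delivering the app container's logs to Fluentd at all.

  <match **>
    @type copy
    <store>
      @type stdout
    </store>
    <store>
      @type splunk_hec
      # ... same splunk_hec settings as in the existing fluent.conf ...
    </store>
  </match>

Also worth checking: host_key, source_key, and sourcetype_key must name fields that actually exist on the records FireLens emits, and the HEC token must be allowed to write to the debaspreet index.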
My organization is struggling to successfully incorporate data science into existing security processes. I'm having a hard time finding resources that help me assess the maturity level of data science in my environment and how to mature it further, with possible use cases and strategies to focus on. Does anyone know of any resources out there to help my organization head in the right direction?
Hi, our system holds XML logs, and the way they are structured, some values are held inside a common set of name/value attribute pairs which repeats a number of times within the XML. The index name is 'applogs'. Example XML:

  <RECORD>
    <ORDER>
      <OrderDate>21-11-2022</OrderDate>
      <OrderRef>12345678</OrderRef>
      <OrderAttributes>
        <OrderAttributeName>Attribute1</OrderAttributeName>
        <OrderAttributeValue>Value1</OrderAttributeValue>
        <OrderAttributeName>Attribute2</OrderAttributeName>
        <OrderAttributeValue>Value2</OrderAttributeValue>
        <OrderAttributeName>Attribute3</OrderAttributeName>
        <OrderAttributeValue>Value3</OrderAttributeValue>
      </OrderAttributes>
    </ORDER>
  </RECORD>

I want to extract the individual attributes to display in a table, something like this:

  OrderDate    OrderRef   Attribute1   Attribute2   Attribute3
  21-11-2022   12345678   Value1       Value2       Value3

I have tried spath, but I am not able to pull the Attribute1/Value1 pair because there are multiple instances of the OrderAttributeName and OrderAttributeValue tags, so I have hit the buffers. Any suggestions on how I can make it work?
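A common pattern for repeated name/value tag pairs (a sketch, assuming spath returns the repeated tags as multivalue fields in document order) is to zip them together, expand, and then promote each name into its own field:

  index=applogs
  | spath path=RECORD.ORDER.OrderDate output=OrderDate
  | spath path=RECORD.ORDER.OrderRef output=OrderRef
  | spath path=RECORD.ORDER.OrderAttributes.OrderAttributeName output=attr_name
  | spath path=RECORD.ORDER.OrderAttributes.OrderAttributeValue output=attr_value
  | eval pair=mvzip(attr_name, attr_value, "=")
  | mvexpand pair
  | rex field=pair "(?<name>[^=]+)=(?<value>.*)"
  | eval {name}=value
  | stats values(OrderDate) as OrderDate values(OrderRef) as OrderRef values(Attribute*) as Attribute* by _raw
  | fields - _raw

The final stats collapses the expanded rows back into one row per original event; grouping by _raw is just one way to keep rows from different orders separate.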
I have a dashboard with a dropdown select. From this dropdown, once I select an API such as "/api/apiresponse/search/", the search results show this:

  2000-1-1 1:0:0.00 INFO : logType=API_RESPONSE, duration=100, request={"headers":"Accept":"application/json","Content-Type":"application/json"},"method":"POST", "body":{"body"},"parameters":{},"uri":"/api/apiresponse/search/"}, configLabel=, requestId=Thisoneismatching11111, response={"headers":{"statusCode":"OK"}, requestUri=/api/apiresponse/search/, threadContextId=Thisoneismatching22222, message=COMPLETED request /api/apiresponse/search/, source = /apps/logs/api_response.log sourcetype = response_log

This is my search query for the API response:

  index=main *_RESPONSE
  | spath input=request
  | spath input=response
  | lookup abc.csv uri OUTPUT opName
  | search Name="$Nme$" opName="$opeNme$" uri="$apis$"

Downstream response log:

  2000-1-1 1:0:0.00 INFO logType=DOWNSTREAM_RESPONSE, duration=100, request={"headers":{"Accept":"application/json","Content-Type":"application/json"},"method":"POST", "body":{"uri":"https://abcdefg.com/downresponseservice/api/downresponse"}, configLabel=, requestId=Thisoneismatching11111, response={"OK":{"statusCode":"OK"}}, requestUri=https://abcdefg.com/downresponseservice/api/downresponse, threadContextId=Thisoneismatching22222, message=<<< Outbound REST response, source = /apps/logs/downstream_response.log sourcetype = response_log

In the same way, is there a way to get the downstream response? All the API logs and downstream logs share two matching fields: requestId and threadContextId. For the selected API, the dashboard should pull the API_RESPONSE logs and only the downstream logs related to that API.
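Since the two event types share requestId, one approach (a sketch; the field names come from the samples above, and the usual subsearch limits apply) is to first find the requestIds of the API_RESPONSE events for the selected uri, then pull every event carrying those ids:

  index=main sourcetype=response_log (logType=API_RESPONSE OR logType=DOWNSTREAM_RESPONSE)
      [ search index=main sourcetype=response_log logType=API_RESPONSE uri="$apis$"
        | fields requestId ]
  | sort 0 requestId _time
  | table _time logType requestId threadContextId uri requestUri duration

The subsearch returns the matching requestId values ORed together, so both the API response and its downstream calls come back; threadContextId could be used the same way if requestId is not always present.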
Hello, I have a table driven by a custom Splunk query, with a custom click action on a cell. It works fine if I select any time range filter other than "Real-time". With a real-time range the click doesn't work: the query appears to be recreated every second, and when I click, the action is not triggered. How can I fix that? Thanks.
I’m looking to get in touch with the developer of the Splunk Add-on for Salesforce Streaming API to see if the source can be shared or made open source. Does anyone know how I can contact them?
Please help: I created this summary index and it was working, but when I checked the data for the next day, it doesn't show any data.
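A first check (a sketch; replace the saved-search name with the one that populates the summary) is whether the populating search actually ran and succeeded on the day that is missing:

  index=_internal sourcetype=scheduler savedsearch_name="<your summary-populating search>"
  | stats count by status

Skipped or failed runs there usually explain a gap in a summary index; if the runs succeeded, the next thing to verify is the time range and the destination index the populating search writes to.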
Hi there, I would like to connect my ESET server to SC4S to send syslog messages. I know that ESET is not listed among the supported known vendors. Is it possible to connect ESET to SC4S? Thanks and regards, pawelF
Delay between index time and search time data: there is a delay of 10 hours.

  index=test_shift "*10987867*"
  | eval indextime=strftime(_indextime,"%d/%m/%Y %H:%M:%S")
  | table _raw _time indextime

_time is 2022-11-15 13:42:31 and indextime is 2022-11-15 23:27:33. The environment is clustered, and even if one indexer is down the delay shouldn't be there. Kindly suggest what the root cause could be so that I can check my environment. Thanks in advance; your answer will be helpful for me.
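To see where the lag comes from, it can help to measure it per host, sourcetype, and indexer (a generic sketch using the same index as above): a lag that is uniform across hosts points at the indexing tier or time zone parsing, while a lag on only some hosts points at the forwarders or the log files themselves.

  index=test_shift
  | eval lag_sec=_indextime-_time
  | stats avg(lag_sec) max(lag_sec) count by host sourcetype splunk_server

A near-constant 10-hour offset is often a time zone mismatch (TZ in props.conf or on the forwarder) rather than a real delivery delay, so checking whether lag_sec sits close to exactly 36000 seconds is a useful first step.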
I am trying to compare a static column (Baseline) with multiple columns (hosts), and if there is a difference I need to highlight that cell in red.

  Component   BASELINE      HOSTA         HOSTB         HOSTC
  GPU         20            20            5             7
  GPU1        5             7             7             5
  FW          2.4.2         2.4.2         2.4.2         2.4.3
  IP          1.1.1.1       1.1.1.2       1.1.1.1       1.1.1.1
  ID          [234 , 336]   [234 , 336]   [134 , 336]   [234 , 336]

<form theme="dark"> <label>Preos Firmware Summary - Liquid Cooled</label> <fieldset submitButton="false"> <input type="multiselect" token="tok_host" searchWhenChanged="true"> <label>Host</label> <valueSuffix>,</valueSuffix> <fieldForLabel>host</fieldForLabel> <fieldForValue>host</fieldForValue> <search> <query>index=pre Type=Liquid_Cooled | stats count by host | dedup host</query> <earliest>-90d@d</earliest> <latest>now</latest> </search> <default>*</default> <delimiter> </delimiter> <choice value="*">All</choice> </input> <input type="multiselect" token="tok_component" searchWhenChanged="true"> <label>Component</label> <choice value="*">All</choice> <default>*</default> <fieldForLabel>Component</fieldForLabel> <fieldForValue>Component</fieldForValue> <search> <query>index=pre Type=Liquid_Cooled host IN ($tok_host$) "IB HCA FW" OR *CPLD* OR BMC OR SBIOS OR *nvme* OR "*GPU* PCISLOT*" OR *NVSW* | rex field=_raw "log-inventory.sh\[(?&lt;id&gt;[^\]]+)\]\:\s*(?&lt;Component&gt;[^\:]+)\:\s*(?&lt;Hardware_Details&gt;.*)" | rex field=_raw "log-inventory.sh\[\d*\]\:\s*CPLD\:\s*(?&lt;Hardware&gt;[^.*]+)" | rex field=_raw "log-inventory.sh\[\d*\]\:\s*BMC\:\s*version\:\s*(?&lt;Hardware1&gt;[^\,]+)" | rex field=_raw "log-inventory.sh\[\d*\]\:\s*SBIOS\s*version\:\s*(?&lt;Hardware2&gt;[^ ]+)" | rex field=_raw "log-inventory.sh\[\d*\]\:\s*nvme\d*\:.*FW\:\s*(?&lt;Hardware3&gt;[^ ]+)" | rex field=_raw "VBIOS\:\s*(?&lt;Hardware4&gt;[^\,]+)" | rex field=_raw "NVSW(\d\s|\s)FW\:\s*(?&lt;Hardware5&gt;(.*))" | rex field=_raw "IB\s*HCA\sFW\:\s*(?&lt;Hardware6&gt;(.*))" | eval output = mvappend(Hardware,Hardware1,Hardware2,Hardware3,Hardware4,Hardware5,Hardware6) | replace BMC WITH "BMC and AUX" in Component | search Component IN("*") | stats latest(output) as output latest(_time) as _time by Component host | fields - _time | eval from="search" | join Component [| inputlookup FW_Tracking_Baseline.csv | search Component!=*ERoT* Component!=PCIeRetimer* Component!="BMC FW ver" | table Component Baseline | eval from="lookup" | rename Baseline as lookup_output | fields lookup_output Component output] | stats count(eval(lookup_output==output)) AS case BY host Component output lookup_output | replace 1 WITH "match" IN case | replace 0 WITH "No match" IN case | stats values(Component) as Component by host lookup_output case output | stats count by Component | dedup Component</query> <earliest>-90d@d</earliest> <latest>now</latest> </search> <valueSuffix>"</valueSuffix> <delimiter> ,</delimiter> <valuePrefix>"</valuePrefix> </input> </fieldset> <row> <panel> <table> <search> <query>index=preos_inventory sourcetype = preos_inventory Type=Liquid_Cooled host IN ($tok_host$) "IB HCA FW" OR *CPLD* OR BMC OR SBIOS OR *nvme* OR "*GPU* PCISLOT*" OR *NVSW* | rex field=_raw "log-inventory.sh\[(?&lt;id&gt;[^\]]+)\]\:\s*(?&lt;Component&gt;[^\:]+)\:\s*(?&lt;Hardware_Details&gt;.*)" | rex field=_raw "log-inventory.sh\[\d*\]\:\s*CPLD\:\s*(?&lt;Hardware&gt;[^.*]+)" | rex field=_raw "log-inventory.sh\[\d*\]\:\s*BMC\:\s*version\:\s*(?&lt;Hardware1&gt;[^\,]+)" | rex field=_raw "log-inventory.sh\[\d*\]\:\s*SBIOS\s*version\:\s*(?&lt;Hardware2&gt;[^ ]+)" | rex field=_raw 
"log-inventory.sh\[\d*\]\:\s*nvme\d*\:.*FW\:\s*(?&lt;Hardware3&gt;[^ ]+)" | rex field=_raw "VBIOS\:\s*(?&lt;Hardware4&gt;[^\,]+)" | rex field=_raw "NVSW\d\s*FW\:\s*(?&lt;Hardware5&gt;(.*))" | rex field=_raw "IB\s*HCA\sFW\:\s*(?&lt;Hardware6&gt;(.*))" | eval output = mvappend(Hardware,Hardware1,Hardware2,Hardware3,Hardware4,Hardware5,Hardware6) | replace BMC WITH "BMC and AUX" in Component | stats latest(output) as output latest(_time) as _time by Component host | eval from="search" | fields - _time | chart values(output) by Component host limit=0 | fillnull value="No Data" | join Component [ | inputlookup FW_Tracking_Baseline.csv | search Component!=*ERoT* Component!=PCIeRetimer* Component!="BMC FW ver" | table Component Baseline | eval from="lookup" | fields Baseline Component output] | fields Component Baseline * | fillnull value="No Data"</query> <earliest>-90d@d</earliest> <latest>now</latest> </search> <option name="count">50</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form>     
Hi, I'm trying to update app permissions to attach a role to the app, but I'm having no luck so far. I tried using the .update() method with what I believe are the correct parameters, but it doesn't look like it handles the access parameters with 'read' and 'write'. Is there another method that may work?
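Permissions live on a separate ACL endpoint rather than on the entity itself, which is why a plain .update() tends to ignore them. A hedged sketch of doing it over REST (the app name, role, and credentials are placeholders, and this assumes the acl endpoint is exposed for apps on your version):

  curl -k -u admin:changeme https://localhost:8089/services/apps/local/<app_name>/acl \
      -d sharing=app -d owner=nobody \
      -d perms.read=my_role -d perms.write=admin

Recent versions of the Splunk Python SDK also expose an acl_update() helper on entities for the same purpose, if the SDK in use has it.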