All Posts



Hello, I ran the following code -

from __future__ import print_function
import urllib.request, urllib.parse, urllib.error
import httplib2
from xml.dom import minidom

baseurl = '<url>'
userName = '<username>'
password = '<password>'
searchQuery = <query>

# Authenticate with server.
# Disable SSL cert validation. Splunk certs are self-signed.
serverContent = httplib2.Http(disable_ssl_certificate_validation=True).request(
    baseurl + '/services/auth/login', 'POST', headers={},
    body=urllib.parse.urlencode({'username': userName, 'password': password}))[1]
sessionKey = minidom.parseString(serverContent).getElementsByTagName('sessionKey')[0].childNodes[0].nodeValue

# Remove leading and trailing whitespace from the search
searchQuery = searchQuery.strip()

# If the query doesn't already start with the 'search' operator or another
# generating command (e.g. "| inputcsv"), then prepend "search " to it.
if not (searchQuery.startswith('search') or searchQuery.startswith("|")):
    searchQuery = 'search ' + searchQuery

print(searchQuery)

# Run the search.
# Again, disable SSL cert validation.
print(httplib2.Http(disable_ssl_certificate_validation=True).request(
    baseurl + '/services/search/jobs', 'POST',
    headers={'Authorization': 'Splunk %s' % sessionKey},
    body=urllib.parse.urlencode({'search': searchQuery}))[1])

I get this error - "TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond"

Is my url format wrong? Thanks
Hello, how can I get my eval case(like(...)) to match all values except a specific value? I have the values below for a field called rule_name:

MMT01_windows_brute_force
MMT02_linux_root_login
MMT03_Aws_guardduty_alert

How do I get eval to match everything except anything with AWS in the name? I need to use the % wildcard for the matching part because there are many matches, but I want to exclude the AWS ones. I found a similar post here where the answer was to use AND with ! to exclude, but that syntax no longer seems to be supported.

| eval rule_type= case(like(rule_name,"MHE0%"),onprem,cloud)

Expected result: rule_type should end up as "onprem" for MMT01 and MMT02 (matched via the wildcard), and MMT03 should be considered "cloud".
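A minimal sketch of one way to do this, keeping the rule_name field and MMT prefix from the post (the upper() call is an assumption, added only to make the AWS test case-insensitive):

| eval rule_type = case(like(upper(rule_name), "%AWS%"), "cloud",
                        like(rule_name, "MMT%"), "onprem",
                        true(), "cloud")

Because case() evaluates its conditions in order, putting the AWS test first excludes those rows before the wildcard match is attempted.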
Hello, I send a GET request from Postman as follows:

curl -u <username> -k https://<url>.net:8089/services/jobs/export -d search="<query>"

Why does it fail with "Cloud Agent Error: Couldn't resolve host. Make sure the domain is publicly accessible or select a different agent."? A variation passes, but when I add "-d output_mode csv" at the end, I do not get any CSV. Where can I see the same result as I see inside Splunk (Enterprise), i.e. tabular output? Thanks
Hi, I have a table with dynamic fields, and some of these fields contain no value or NULL. How do I remove these fields when I don't know the field names beforehand? The field names are never the same, so I cannot simply do | fields - name1, name2 etc. Is there a way to remove every field containing no value in a table?
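One known approach, sketched here with an artificial row_id key introduced only for illustration: round-tripping the results through untable and xyseries drops columns that contain no values, because untable only emits rows for fields that actually hold data.

<your search>
| table *
| streamstats count as row_id
| untable row_id field_name field_value
| xyseries row_id field_name field_value
| fields - row_id

Whether empty strings (as opposed to true nulls) are also dropped depends on how the fields were produced, and multivalue fields may need extra handling.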
I am trying to ingest data from Cortex via API. The API works 100%, but I am getting the following script errors in splunkd.log. Also attached is the log from my partner's environment where we need to complete the integration.

8-14-2024 10:30:27.459 +0200 ERROR ScriptRunner [12760 TcpChannelThread] - stderr from 'C:\Program Files\Splunk\bin\Python3.exe C:\Program Files\Splunk\bin\runScript.py execute':    return func(*args, **kwargs)
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:   File "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\aob_py3\solnlib\utils.py", line 153, in wrapper
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:     return func(*args, **kwargs)
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:   File "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\aob_py3\solnlib\credentials.py", line 137, in get_password
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:     f"Failed to get password of realm={self._realm}, user={user}."
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#Splunk_TA_paloalto#configs/conf-splunk_ta_paloalto_settings, user=proxy.
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}: .
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:   File "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\aob_py3\solnlib\utils.py", line 153, in wrapper
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:     return func(*args, **kwargs)
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:   File "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\aob_py3\solnlib\credentials.py", line 137, in get_password
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:     f"Failed to get password of realm={self._realm}, user={user}."
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#Splunk_TA_paloalto#configs/conf-splunk_ta_paloalto_settings, user=additional_parameters.

Please advise.
Palo Alto Cortex XDR
Palo Alto Networks Add-on for Splunk
Hi, we have successfully configured a dashboard for UPS monitoring, and it had been working fine with no issues. Since 01/08/2024 no data is showing up in the tile.

Checked the UF and services - all working with no issues; restarted the service but the issue was not resolved.
Checked the Splunk index - the latest event is 17 days old; not sure what the problem is.

Could you please advise on the issue?
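A minimal sketch of how to confirm when data last arrived for the dashboard's index and which host stopped sending (the index name below is a placeholder, since it isn't given in the post):

| tstats latest(_time) as latest_event where index=<your_ups_index> by host, sourcetype
| eval latest_event = strftime(latest_event, "%Y-%m-%d %H:%M:%S")
| sort latest_event

If the newest event really is 17 days old, the gap is on the ingestion side (inputs on the UF, or parsing and routing) rather than in the dashboard search, so the forwarder's splunkd.log and the _internal index for that host are the next places to look.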
Rather than using the subsearch syntax with append

| append [ | inputlookup ... ]

use the native

| inputlookup append=t

which has no subsearch limitations. You also don't need the redundant fields command, as those fields will be removed by the stats anyway, so:

index=EDR
| stats count
| eval Status=if((count > "0"),"Compliant","Not Compliant"), Solution="EDR"
| inputlookup append=t compliance.csv
| stats first(Status) AS Status BY Solution
| outputlookup compliance.csv
It depends on the lookup type. If your lookup is a CSV-file based one, you can't update individual rows in place. The only thing you can do, as was shown by @gcusello , is to overwrite the whole lookup with updated contents.
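A minimal sketch of that read-modify-write pattern, reusing the compliance.csv lookup from this thread (the eval condition is purely illustrative):

| inputlookup compliance.csv
| eval Status=if(Solution="EDR", "Compliant", Status)
| outputlookup compliance.csv

Note that outputlookup replaces the entire file, so the search has to return every row you want to keep, not just the changed ones.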
Hi @whrg , good for you, see next time! let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @MoeTaher , yes correct (I'm sorry!):

index=EDR
| stats count
| eval Status=if((count > "0"),"Compliant","Not Compliant"), Solution="EDR"
| fields - count
| append [ | inputlookup compliance.csv | fields Solution Status ]
| stats first(Status) AS Status BY Solution
| outputlookup compliance.csv

Ciao. Giuseppe
Thanks @gcusello . How do I replace join with stats, given that I am taking data from other tables?
Hi @eherbst63 , good for you, see next time! let us know if we can help you more, or, please, accept one answer (also your own) for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @wm , probably because you're using many join commands, and this command uses subsearches: subsearches have a limit of 50,000 results, so the matches between the subsearches are fewer because there are fewer results than there should be. Splunk isn't a database, so you cannot use the approach that you would usually use in a SQL query; in other words, avoid the join command and correlate searches using the stats command. In addition, using join, you surely have a very slow search. Search the Community and you'll find many examples of replacing join with stats. Ciao. Giuseppe
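As a generic illustration of that suggestion (not the poster's actual search), a hedged sketch of the usual join-to-stats rewrite, with purely hypothetical index and field names:

index=orders
| join order_id [ search index=shipments | fields order_id ship_time ]

can usually be rewritten as

(index=orders) OR (index=shipments)
| stats values(order_time) as order_time values(ship_time) as ship_time by order_id

The stats form scans all events in a single pass, so it is not subject to the 50,000-result subsearch limit that join inherits, and it is typically much faster.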
Hi all, index=sky sourcetype=sky_trade_wss_timestamp | rex field=_raw "trade_id=\"(?<trade_id>\X+)\", event_id" | rex field=_raw "mx_status=\"(?<status>\X+)\", operation" | rex field=_raw "operation=\"(?<operation>\X+)\", action" | rex field=_raw " action=\"(?<action>\X+)\", tradebooking_sgp" | rex field=_raw " eventtime_sgp=\"(?<distributor_timestamp>\X+)\", sky_to_mq" | rex field=_raw "distributor_latency=\"(?<distributor_latency>[^\"]+)\".*\bportfolio_name=\"(?<portfolio_name>[^\"]+)\".*\bportfolio_entity=\"(?<portfolio_entity>[^\"]+)\".*\btrade_type=\"(?<trade_type>[^\"]+)" | join event_id [ search index=sky sourcetype=Sky_WSS_EP_Logs "Successfully processed event" | rex field=_raw "INFO: (?<ep_timestamp>\d{8} \d{2}:\d{2}:\d{2}.\d{3})" | rex field=_raw "Successfully processed event: (?<event_id>\X+), action" | eval ep_timestamp = strftime(strptime(ep_timestamp."+0800", "%Y%d%m %H:%M:%S.%N%z"), "%Y-%m-%d %H:%M:%S.%3N")] | join type=left sky_id [ search index=sky "New trades in amendment" "*pnl*" | rex "Trade Events (?<trades>.*)" | rex max_match=0 field=trades "(?<both_id>\d+:\d+)" | mvexpand both_id | rex field=both_id ":(?<sky_id>\d+)" | rex max_match=1 field=_raw "(?<booking_pnl_timestamp>\d{4}+-\d{2}+-\d{2} \d{2}:\d{2}:\d{2},\d{3})"] | rex field=tradebooking_sgp "(?<booking_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})" | rex field=distributor_timestamp "(?<distributor_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3})" | eval booking_pnl_timestamp = booking_pnl_timestamp."+0800" | eval ep_latency = strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | eval pnl_latency = strptime(booking_pnl_timestamp, "%Y-%m-%d %H:%M:%S,%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | search trade_id = "*" | search sky_id = "*" | search event_id = "*" | search action = "*" | search mx_status = "live" | search operation = "*" | table trade_id, sky_id, event_id, booking_timestamp, distributor_timestamp, ep_timestamp, mx_status, operation, action, distributor_latency, ep_latency, portfolio_name, portfolio_entity | sort ep_timestamp | join type=left sky_id [ search index=sky sourcetype=sky_cashfx_catchup_logs "[WSS] - Done incremental update" | rex field=_raw "Max Skylib TradeID: (?<sky_id>\d+)" | rex field=_raw "^(?<catchup_updated_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})" | eval catchup_updated_time = strftime(strptime(catchup_updated_time, "%Y-%m-%d %H:%M:%S,%3N"), "%Y/%m/%d %H:%M:%S.%3N") | table sky_id, catchup_updated_time, _raw, ] | eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S") | eval catchup_unix_time = strptime(catchup_updated_time, "%Y/%m/%d %H:%M:%S.%3N") | eval wss_to_sky_latency = catchup_unix_time - booking_timestamp_unix | eval distributor_timestamp_unix = strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_latency = distributor_timestamp_unix - booking_timestamp_unix | eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix | eval mq_to_sky_update_latency = catchup_unix_time - distributor_timestamp_unix | eval ep_timestamp = strftime(strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%3N"), "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_timestamp = strftime(strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%4N"), "%Y/%m/%d %H:%M:%S.%4N") | eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_timestamp_unix = strptime(distributor_timestamp, 
"%Y/%m/%d %H:%M:%S.%4N") | eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix | table trade_id, portfolio_name, portfolio_entity, sky_id, event_id, booking_timestamp, booking_timestamp_unix, distributor_timestamp, distributor_timestamp_unix, ep_timestamp, distributor_latency, ep_latency, catchup_updated_time, wss_to_sky_latency, mq_to_sky_update_latency, mx_status, operation, action, catchup_unix_time | dedup sky_id | sort booking_timestamp | rex field=trade_id "^\w+ (?<dealnumber>\d+)$" | join type=left dealnumber [ search index=wss "Sending message" source="/proj/flowfx/wss/FFXWS01P/log/MQ1.log" ```Exclude Far Legs of Swap Trades for first Iteration of Dash``` NOT "<swap_leg>2</swap_leg>" ```Exclude Cancels, Amends, Auxiliaries, Allocations, Blocks - allocated ``` NOT "<status>" ```Exclude MM Deals ``` NOT "<WSSMMTRADE>" | rex "\<transaction\>(?P<tid>.*?)\<\/transaction\>" | rex "\<deal_number\>(?P<dealnumber>.*?)\<\/deal_number\>" | rex "\<external_deal\>(?P<sourcesystemid>.*?)\<\/external_deal\>" | rex "\<cust_type\>(?P<custType>.*?)\<\/cust_type\>" | eval region=case(host == "pffxa01z", "Global", host == "pffxa02z", "China") | eval wssSendingTime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-processor.log" "INFO SLA FFX-Processor received" | rex "transactionId\=(?P<tid>.*?)\," | eval flowfxincomingtime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table flowfxincomingtime,tid, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime | eval wssSendingTimeUnix=strptime(wssSendingTime,"%Y-%m-%d %H:%M:%S.%Q") | eval flowfxincomingtimeUnix=strptime(flowfxincomingtime,"%Y-%m-%d %H:%M:%S.%Q") | eval timebetweenWssFlowfx = flowfxincomingtimeUnix - wssSendingTimeUnix | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-trade-sender-cim.log" "INFO SLA FFX-Trade-Sender sent" | rex "nearTransactionId\=(?P<tid>.*?)\," | eval CIMsendingTime=strftime(_time,"%Y/%m/%d %H:%M:%S.%Q") | eval MQ_available_time=strftime(_time - 7200, "%Y-%m-%d %H:%M:%S.%Q") | table CIMsendingTime,tid,MQ_available_time,booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix,CIMsendingTime, MQ_available_time, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | eval CIMsendingTime_unix = strptime(CIMsendingTime, "%Y/%m/%d %H:%M:%S.%3Q") | eval cim_latency = CIMsendingTime_unix - booking_timestamp_unix | eval distributor_latency = distributor_timestamp_unix - CIMsendingTime_unix | eval distributor_to_sky_latency = catchup_unix_time - CIMsendingTime_unix | where len(CIMsendingTime) > 0 | eval cim_latency = round(cim_latency * 1000,0) | eval distributor_latency = round(distributor_latency * 1000,0) | eval ep_latency = round(ep_latency * 1000,0) | eval wss_to_sky_latency = round(wss_to_sky_latency * 1000,0) | eval 
mq_to_sky_update_latency = round(mq_to_sky_update_latency * 1000,0) | eval distributor_to_sky_latency = round(distributor_to_sky_latency * 1000,0) | table trade_id, portfolio_name, sky_id, booking_timestamp,CIMsendingTime, distributor_timestamp, ep_timestamp, catchup_updated_time, wss_to_sky_latency, cim_latency, distributor_latency, ep_latency, mq_to_sky_update_latency, distributor_to_sky_latency, mx_status, operation, action

The above is my current search query, but over the last 24 hours I get more events yet fewer statistics results than over the last 4 hours.
I am getting the same error. Is a fix known for this issue? Regards
Hi, let's say I have the sample data below, all being ingested into index="characters". How do I create two separate sub-indexes, "superheroes" and "villains", such that the "superheroes" index contains only the events with archetype="superhero" (id=superman, batman) and the "villains" index contains only the event with archetype="villain" (id="joker")? The reasoning is that I want to set permissions on the sub-indexes so only specific users can see each index (e.g. only people with the role "good guys" can see superhero data).

I have tried summary indexing with the following query, scheduled the search, and enabled summary indexing, but it doesn't capture the original fields in the data.

index=characters
| fields id, strengths, archetype
| where archetype="superhero"
| eventstats count as total_superheroes
| table id, strengths, archetype

Sample JSON data:

[
  {
    "id": "superman",
    "strengths": "super strength, flight, and heat vision",
    "archetype": "superhero"
  },
  {
    "id": "batman",
    "strengths": "exceptional martial arts skills, detective abilities, and psychic abilities",
    "archetype": "superhero"
  },
  {
    "id": "joker",
    "strengths": "cunning and unpredictable personality",
    "archetype": "villain"
  }
]
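For reference, a hedged sketch of the collect-based variant of that scheduled search (the index names come from the post; "superheroes" would have to exist as a normal index, since Splunk has no real sub-index concept):

index=characters archetype="superhero"
| collect index=superheroes

Be aware that collect writes results in stash format by default, so how faithfully the original fields survive depends on what the search returns. If events must be preserved exactly with index-level permissions, index-time routing via props/transforms on the ingestion path is the usual alternative, though it only applies to newly arriving data.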
See transaction.  Because the sample dataset is small, and they do not start at the top of a cycle, I wanted to show results from incomplete transactions.  You need to analyze real data to see which options are right for your use case.
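A minimal sketch of the kind of invocation being described (the field name and start/end markers are illustrative, not the exact answer from the thread):

index=<your_index>
| transaction session_id startswith="cycle start" endswith="cycle end" keepevicted=true
| table session_id duration eventcount closed_txn

keepevicted=true keeps transactions that were evicted before they could be closed (they are marked with closed_txn=0), which is why incomplete cycles still appear in the results; without it, those partial transactions are silently dropped.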
Hi all: I'm a rookie user asking for help. I want to extract all the values from one _raw event (a CLI command log, as shown in the photo below). I would like to get the values for the fields Location, Card, Type, Mnemonic, Part Number, Serial Number, CLEI, Pmax(W), and Imax(A). Can someone help me please? Thank you very much.
Hi @yuanliu , may I know what keepevicted=t does and what happens if we don't use it?
Does anyone know how the Cluster Manager populates the dmc_forwarder_assets input lookup CSV table? I have an issue where my UF forwarder reports show hosts whose os field contains repeated entries of Windows hundreds and even thousands of times. I'd like to check how this data table is being populated by the CM.
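For what it's worth, the forwarder asset table is normally rebuilt by a scheduled search in the Monitoring Console app (named something like "DMC Forwarder - Build Asset Table" in recent versions - worth verifying in your environment), so that search is where the duplication most likely originates. A hedged sketch for spotting the affected rows, assuming the os field name matches what the forwarder report shows:

| inputlookup dmc_forwarder_assets
| eval os_count = mvcount(os)
| where os_count > 1
| sort - os_count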