All Topics

Hi, I have a scan dataset with a field named TEXT that contains test-result data. I am also provided data from a SQL table, which I ingest via DB Connect; that data contains a TEXT field too, along with priorities to assign. I am using the SQL data in a saved search that outputs into a lookup. My issue is that the TEXT values in the lookup contain wildcards (%), so I am not able to match what is in the raw data with what is in the lookup.

Sample raw data:

TEXT= Cas is required to perform this test.

Sample lookup values:

%Cas is required to perform this test.%
%Cas is required%
%Cas is required % this test.%
%

Test search:

index=sample sourcetype=sample_vuln TEXT IN ("*Cas is required*")
| stats count by TEXT
| lookup sample_rules TEXT AS TEXT OUTPUT TEXT AS TEXT_Test, Priority

How can I match the raw data with the lookup when the field values contain wildcards in the combinations above? I edited the lookup's definition and configured the Match Type option as WILDCARD(TEXT), but that is not helping. Thanks in advance!!!
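One hedged sketch, assuming the lookup can be regenerated: Splunk's WILDCARD match type expects * rather than SQL-style %, so the saved search that builds the lookup could translate the wildcards first (sample_rules_wildcard is a hypothetical lookup name):

| inputlookup sample_rules
| eval TEXT=replace(TEXT, "%", "*")
| outputlookup sample_rules_wildcard.csv

With a lookup definition over the new file and its Match Type set to WILDCARD(TEXT), a lookup such as | lookup sample_rules_wildcard TEXT OUTPUT Priority should then match the raw events against the wildcarded values.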
I want to export uploaded data (from the Add Data option) and then push that data to an AWS S3 bucket. How can I do it? I saw this link https://docs.splunk.com/Documentation/DSP/1.2.0/FunctionReference/S3 but got no help from it. Please help me.
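A minimal sketch of one possible route, assuming CLI access to the Splunk instance and a configured AWS CLI (my_index and my-bucket are placeholders): export the events with the Splunk CLI, then copy the file to S3.

# Export the uploaded events to CSV (adjust the search to match the data)
splunk search "index=my_index" -output csv -maxout 0 > export.csv

# Push the export to the bucket
aws s3 cp export.csv s3://my-bucket/export.csv

The DSP page linked above belongs to the Data Stream Processor product, which is separate from Splunk Enterprise, so a plain export-and-copy may be the simpler path here.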
Hi,

I have some data which looks like this from a Splunk report:

Server        Prod1-Ver   Prod1-Latest         Prod2-Ver   Prod2-Latest
server1.com   11.7.1.2    11.7.1.2             8.2         8.3
server2.com   11.3.1.0    11.3.1.2|11.5.1.1    6.8         6.10|7.9
server3.com   11.7.0.2    11.7.1.1             7.4         6.10|7.9

I want to be able to compare each Prod-Ver to the corresponding Prod-Latest. Some of the latest values list different releases for different point levels; in the example above, either 11.3 or 11.5 can be used, so I need to check 11.3.1.0 against 11.3.1.2 and not against 11.5.1.1. I figure I will need to use mvexpand to get multiple rows, then exclude the ones not matching the same point level. Is there an easier way of doing it? To do what I suggested, I think I would need four versions of the same line (more when there are other products) to cover all combinations, e.g. the server2.com line above.

Thanks.
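A hedged sketch for one product column, assuming the Latest values are |-delimited and that matching the first two version components identifies the right point level (for two-component versions like 6.8, the test would need to compare only the first component):

| eval latest=split('Prod1-Latest', "|")
| mvexpand latest
| eval ver_branch=mvjoin(mvindex(split('Prod1-Ver', "."), 0, 1), ".")
| eval latest_branch=mvjoin(mvindex(split(latest, "."), 0, 1), ".")
| where ver_branch=latest_branch
| eval Prod1_current=if('Prod1-Ver'=latest, "yes", "no")

This expands server2.com into one row per candidate release and keeps only the 11.3.x row, which avoids enumerating all the cross-product combinations by hand.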
Hi All, I know this question is very generic, but I will try asking anyway. We have 2 sites with this indexing-tier configuration:

Site 01: 3 indexers (dedicated VMs, each with 2x20 physical-core Intel Xeon 6148 CPUs and 96 GB RAM), SF=2, RF=2
Site 02: 3 indexers (dedicated VMs, each with 2x20 physical-core Intel Xeon 6148 CPUs and 96 GB RAM), SF=2, RF=2

Search heads are configured with site affinity, so searches go to both sites. The machines are quite new; supposing you don't have any limitation on the storage tier, how many IOPS would you expect when pushing them to the limit?

Thanks a lot,
Edoardo
Hi Everyone, I have one requirement. As of now I have set Auto Refresh for my panel. My requirement is to create an Auto Refresh checkbox which, when checked, starts auto-refreshing every 5 seconds and stops automatically after 5 minutes. By default the checkbox should be unchecked. Below is my code:

<form theme="dark">
  <label>Process Dashboard Auto Refresh</label>
  <fieldset submitButton="true" autoRun="true">
    <input type="time" token="field1" searchWhenChanged="true">
      <label>Date/Time</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="text" token="process_tok1">
      <label>Processor Id</label>
      <default>*</default>
    </input>
    <input type="text" token="ckey" searchWhenChanged="true">
      <label>Parent Chain</label>
      <default></default>
      <prefix>parent_chain="*</prefix>
      <suffix>*"</suffix>
      <initialValue></initialValue>
    </input>
    <input type="text" token="usr">
      <label>User</label>
      <default>*</default>
    </input>
    <input type="checkbox" token="auto">
      <label>Auto Refresh</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=abc sourcetype=xyz source="user.log" $process_tok1$ | rex field=_raw "(?&lt;id&gt;[A_Za-z0-9]{8}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{12})" | join type=outer id [inputlookup parent_chains_e1.csv] | search $ckey$ | search $usr$ | eval ClickHere=url | rex field=url mode=sed "s/\\/\\//\\//g s/https:/https:\\//g" | table _time _raw host id parent_chain url</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>5s</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <drilldown>
          <condition field="url">
            <link target="_blank">$row.url|n$</link>
          </condition>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

Can someone guide me on how to achieve this?
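A hedged sketch of the checkbox part, assuming Simple XML token substitution works inside the refresh element here (worth testing); the automatic stop after 5 minutes is not something plain Simple XML offers, so that piece would likely need custom JavaScript:

<input type="checkbox" token="auto" searchWhenChanged="true">
  <label>Auto Refresh</label>
  <choice value="enabled">Every 5 sec</choice>
  <change>
    <condition value="enabled">
      <set token="refresh_rate">5s</set>
    </condition>
    <condition>
      <unset token="refresh_rate"></unset>
    </condition>
  </change>
</input>

Then replace <refresh>5s</refresh> in the search with <refresh>$refresh_rate$</refresh>, so the panel only refreshes while the box is checked and stops when it is unchecked.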
Hi Team, I have one requirement. I have raw logs that come in different formats. Below are some examples:

2021-02-12 09:22:32,936 INFO [ Web -4092] AuthenticationFilter Attempting request for (<asriva22><lgposputb500910.ghp.bcp.com><CN=lgposputb50010.ghp.aexp.com, OU=Middleware Utilities, O=ABC Company, L=Phoenix, ST=Arizona

2021-02-12 09:22:38,689 INFO [ Web -4099] o.a.n.w.s.AuthenticationFilter Authentication success for smennen

2021-02-12 08:45:05,277 INFO [Web -3253] o.a.n.w.s.AuthenticationFilter Attempting request for (<JWT token>) GET https://ebac/api/flow/controller/bulletins

I want to extract the highlighted field as Request User. Can someone guide me on how I can do that? Thanks in advance.
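A hedged sketch, assuming the values to capture are the name inside the first angle-bracket group after "Attempting request for" (asriva22) and the name after "Authentication success for" (smennen); each rex only sets the field when its pattern matches, so the two extractions can be chained:

| rex field=_raw "Attempting request for \(<(?<Request_User>[^>]+)>"
| rex field=_raw "Authentication success for (?<Request_User>\S+)"

For the third sample this captures the literal "JWT token", so a further filter or pattern tweak would be needed if JWT requests should be treated differently.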
Are SmartStore buckets uploaded to S3 immutable? We've been using SmartStore for almost a year and I have never seen an update to a bucket after its original upload to S3. Can anyone confirm this?
Hi all, can you please help solve this error by modifying the rex line? Here is my error:

Error in 'rex' command: regex="[^,]+\:\s(?<Result>[^,]+)\,[^,]+\:\s(?<CardTyp>[^,]+|)\,[^,]+\:\s(?<TxTyp>[^,]+)\,[^,]+\:\s(?<Amount>[^,]+|)\,[^,]+\:\s(?<CardTech>[^,]+|)\,[^,]+\:\s(?<TerminalId>[^,]+|)\,[^,]+\:\s(?<TxDtTm>[^,]+|)\,[^,]+\:\s(?<AquirNm>[^,]+|)\,[^,]+\:\s(?<CardNu>[^,]+|)\,[^,]+\:\s(?<Merchant>[^,]+|)\,[^,]+\:\s(?<ExtraData>\[.*?\]|)\,[^,]+\:\s(?<ErrorMsg>[^,]+|)" has exceeded configured match_limit, consider raising the value in limits.conf

Thank you.
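Two hedged options: raise the limit the error message points at, or simplify the pattern so it backtracks less. The repeated "[^,]+|" alternations with an empty branch are a common cause of excessive matching; "[^,]*" expresses the same optionality more cheaply. The limits.conf setting, on the search head, would look like this (the value is illustrative):

# limits.conf
[rex]
match_limit = 1000000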
Can someone help here, please? I'm trying to remove the header, which is currently being indexed as an event during parsing; it needs to be removed. Also, the timestamp is not correct. Below is the config from props.conf:

KV_MODE = auto
SHOULD_LINEMERGE = false
EVENT_BREAKER_ENABLE = true
DATETIME_CONFIG = NONE
CHARSET = UTF-8
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = Date,Time
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
disabled = false
pulldown_type = 1
TIMESTAMP_FIELDS = Date,Time
FIELD_DELIMITER = ,
FIELD_QUOTE = "
CHECK_FOR_HEADER = true

@splunk @Anonymous
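A hedged cleanup, assuming this is a headered CSV and the stanza lives where the file is first read (indexed extractions run on the forwarder, not the indexer): DATETIME_CONFIG = NONE disables timestamp recognition and so likely explains the wrong timestamps despite TIMESTAMP_FIELDS, and the duplicated keys add noise. A trimmed stanza might look like:

[your_csv_sourcetype]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
FIELD_QUOTE = "
TIMESTAMP_FIELDS = Date,Time
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true

With INDEXED_EXTRACTIONS = CSV and HEADER_FIELD_LINE_NUMBER set, the header line should be consumed for field names rather than indexed as an event.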
Hello Splunkers - I'm looking for dashboard skins/templates for application and infrastructure metrics in Splunk Enterprise 7.3.x. Is there any built-in dashboard or sample template that I can configure and customize? Thanks a lot for your input in advance. BharaniK
How do I configure the DB Connect app on Splunk Cloud to talk to a DB2 server running locally? We don't have Splunk Enterprise or heavy forwarders. Please advise.
How can I retrieve the SID of a saved search via curl?
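A hedged sketch against the Splunk REST API, with placeholder credentials and saved-search name: dispatching the saved search returns its SID in the response, and already-dispatched jobs can be listed and filtered.

# Dispatch the saved search; the response XML contains <sid>...</sid>
curl -k -u admin:changeme -X POST \
  https://localhost:8089/services/saved/searches/MySavedSearch/dispatch

# List existing search jobs and filter on the saved search name
curl -k -u admin:changeme \
  "https://localhost:8089/services/search/jobs?search=MySavedSearch"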
Hi, I have Security Essentials configured, but I want to indicate that some content is implemented in another product whose data is not in Splunk, and still have the MITRE ATT&CK heat map show it as implemented. Do you have any idea how to do that? Thank you.
Hello, I get the following error when I start the machine agent ver 21.1.0 on CentOS 7:

Using java executable at /opt/appdynamics/appd_agents/machine-agent/jre/bin/java
Using Java Version [11.0.9] for Agent
Using Agent Version [Machine Agent v21.1.0-3041 GA compatible with 4.4.1.0 Build Date 2021-01-22 00:14:22]
ERROR StatusLogger Reconfiguration failed: No configuration found for '2a17b7b6' at 'null' in 'null'
[INFO] Agent logging directory set to: [/opt/appdynamics/appd_agents/machine-agent/logs]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.inject.internal.cglib.core.$ReflectUtils$1 (file:/opt/appdynamics/appd_agents/machine-agent/lib/guice-4.2.3.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
WARNING: Please consider reporting this to the maintainers of com.google.inject.internal.cglib.core.$ReflectUtils$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release

Please help resolve.
Thanks,
Venkat
I might be confusing myself by making this harder than it is... Say I have a log where the events are:

LOGIN
ACTION (1)
ACTION (2)
LOGOUT
LOGIN
ACTION (3)
ACTION (4)
ACTION (5)
LOGOUT

What I would like is to be able to display all the ACTION events that happened between just the first LOGIN/LOGOUT pair and output:

ACTION (1)
ACTION (2)

This is in a dashboard, and I've got dropdowns to identify each unique LOGIN event; those are working just fine. I tried a transaction, but I think that might be the wrong tool for the job, and I'm worried I got too fixated on that and am now missing the forest for the trees. What I want is all ACTION events bounded by the selected LOGIN and the next subsequent LOGOUT. So in terms of metacode I want something along the lines of:

| search ACTION earliest=LOGIN._time latest=LOGOUT._time

Does that make sense? Am I approaching this from the wrong direction? Or is this just a bit of search code I haven't figured out?
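One hedged alternative to transaction, assuming events arrive in time order and LOGIN/LOGOUT/ACTION are literal markers in the raw events (your_index is a placeholder): number the sessions by counting LOGINs with streamstats, then keep the ACTIONs from session 1, or from whichever session the dropdown selects:

index=your_index ("LOGIN" OR "LOGOUT" OR "ACTION")
| sort 0 _time
| streamstats count(eval(searchmatch("LOGIN"))) as session
| where session=1 AND searchmatch("ACTION")

Each LOGIN increments the session counter, so every ACTION inherits the number of the LOGIN that precedes it.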
Hi Splunkers! First time posting here, but I could really use some help. I've been meddling with Splunk for a while, and I've got the gist of it. However, I've been having a bad time with this particular search condition. If you're familiar with ServiceNow, it creates event logs for every state change or update a ticket receives, so you have more than one log per INC (field name -> "number"). The following search allows me to see the latest "ticket" regardless of its dv_state. However, I want to "hide" the ones that are "Closed" or "Resolved". (Note: I've redacted some of the values as I consider them to be sensitive information.) It's worth mentioning that the field "active" could be useful (values = "true" or "false"), but even if I put active="true", it will also show the dv_state in which this field was true (even though the latest state is "Closed").

splunk_server_group=oi source="[redacted]" sourcetype="snow:incident" number="*" short_description="[redacted]*" dv_state="*"
    dv_opened_by IN ("Oscar Pavon", "Helena Taribo", "Ronald Guevara", "Andres Penagos", "Matias Alcorta", "Agustin Gonzalez", "Abigail Soto", "Luis Huenuman")
    NOT sys_created_by IN ("rsa.archer", "Support")
| table number severity opened_at sys_updated_on dv_state dv_opened_by short_description dv_assignment_group
| sort -opened_at
| dedup number
| rename number as "INC Number", severity as "Severity", opened_at as "First Opened", sys_updated_on as "Latest Update", dv_opened_by as "Opened by", dv_assignment_group as "Assignment Group", dv_state as "Status", short_description as "Short Description"

INC Number   Severity  First Opened         Latest Update        Status    Opened By         Short Description  Assignment Group
INC1075596   3         2021-02-11 19:34:48  2021-02-11 19:56:17  New       Agustin Gonzalez  [redacted]         [redacted]
INC1071433   3         2021-02-08 14:52:55  2021-02-08 16:36:53  Resolved  Abigail Soto      [redacted]         [redacted]
...

Thanks!!
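A hedged tweak, assuming the goal is to drop tickets whose most recent event is Closed or Resolved: sort by the update time so dedup keeps the newest event per ticket, then filter after the dedup, so the test runs against each ticket's latest state rather than against individual state-change events:

... | sort 0 -sys_updated_on
| dedup number
| search NOT dv_state IN ("Closed", "Resolved")
| table number severity opened_at sys_updated_on dv_state dv_opened_by short_description dv_assignment_group

Filtering before the dedup only removes the Closed/Resolved events themselves, which is why active="true" still surfaces tickets that have since been closed.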
While trying to get the data from a UF to the indexer, the header is getting indexed as well. Attached are the log file and the inputs.conf and props.conf currently configured.

props.conf:

[linux_secure]
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ~^~
EVENT_BREAKER = "([\r\n]+)"

inputs.conf:

[monitor:///tmp/Patch/*.log]
disabled = 0
index = main
sourcetype = linux_secure
crcSalt = <SOURCE>

Log below:

Application~^~Server~^~Pre_Patching_Status~^~Patching_Status~^~Post_Patching_Status~^~Overall_Status
SAP GTS~^~sppgtslapew01~^~Success~^~Success~^~Success~^~Success
SAP GTS~^~sppgtslapew02~^~Success~^~Success~^~Success~^~Success
SAP GTS~^~sppgtsldbew01~^~Success~^~Failed~^~Skipped~^~Failed

Thank you.
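A hedged observation with a sketch: HEADER_FIELD_LINE_NUMBER and FIELD_DELIMITER only take effect as part of structured-data parsing (INDEXED_EXTRACTIONS), and that parsing happens on the instance that first reads the file, so the stanza would need to sit on the UF and look something like the following (the docs describe FIELD_DELIMITER mainly for single characters, so whether the multi-character ~^~ delimiter is accepted needs testing):

# props.conf on the universal forwarder
[linux_secure]
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = ~^~
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false

If the delimiter is not accepted, an alternative is to drop the header line at index time with a TRANSFORMS rule that routes lines starting with "Application~^~" to nullQueue.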
Hi, I can't seem to work out how to do this. I've looked in the documentation but can't find an example. I am trying to set up date/time recognition for a log file that has the date only on the first line of the file, and every entry thereafter has only the time. Here is an example:

Logfile name xxxxx
Current Day: 01/30/2021
(13:11:06.696)(07059)ABCDEF_01: TX (000)162,47773,455,0538,126,00152,00174|00000
(13:11:07.324)(07060)ABCDEF_01: RX (000)162,47773,455,0538,126,00152,00174|00000

How do I define the extraction so every event gets the date 01/30/2021, with the time of the event taken from each line as %H:%M:%S.%3N %Z?
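A hedged starting point, assuming Splunk's fallback of reusing the most recently parsed date in a file when an event carries only a time (my_sourcetype is a placeholder); if that fallback doesn't behave as needed for this feed, a custom datetime.xml is the heavier alternative:

# props.conf (placeholder sourcetype name)
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^\(
TIME_FORMAT = %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 13

The intent is that the "Current Day:" line supplies the date once, and each subsequent event picks up its own time from the leading parenthesized field.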
Hi, I'm new to Splunk and I'm trying to enable a tag on Splunk collection so I can know which heavy forwarder/indexer each log source is coming through, but I don't know how to do it or how to approach it. Thank you for your help. Regards.
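A hedged sketch of one approach: the indexer is already recorded in the splunk_server field of every event, and the heavy forwarder can stamp an indexed field at ingest time with INGEST_EVAL (hf_name, add_hf_name, and the "hf01" value are hypothetical; each HF would carry its own value):

# transforms.conf on each heavy forwarder
[add_hf_name]
INGEST_EVAL = hf_name="hf01"

# props.conf on each heavy forwarder
[default]
TRANSFORMS-hfname = add_hf_name

Searches can then filter with hf_name::hf01, or the field can be declared in fields.conf on the search head so that hf_name=hf01 works as well.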
Hello everybody, I need to ingest into Splunk a CSV file containing an inventory of mobile devices. The HF that monitors the directory is a Red Hat 8 server with Splunk 8.1.0 installed. Since the file is an inventory, the CSV doesn't change frequently, and Splunk complains because the file is the same as the first one indexed:

02-12-2021 03:00:04.466 +0100 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/path/to/file/filename_YYYY_mm_DD_HH_MM_SS_MILLIS.csv). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

As suggested in other posts, I added the following line in inputs.conf:

crcSalt = <SOURCE>

The CSV file is downloaded once a day using a REST call, and the file name has the timestamp appended at the end, so I expected this option to help me overcome the issue. But despite setting crcSalt, Splunk keeps complaining, skipping the file and giving the same message as above. Any idea about this issue? Am I doing anything wrong? Thanks in advance.
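A hedged suggestion following the error message's own hint: worth checking first that crcSalt sits inside the exact monitor stanza that matches these files and that the forwarder was restarted afterwards; beyond that, if the first 256 bytes of every download (the default CRC window) are an identical header, raising initCrcLen as the message suggests may help (path and length are illustrative):

# inputs.conf on the heavy forwarder
[monitor:///path/to/file/filename_*.csv]
crcSalt = <SOURCE>
initCrcLen = 1024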