Hi everyone, how can we open a mail client from a hyperlink in Splunk? Below is my text: <html depends="$show_html$"> <div style="font-weight:bold;font-size:200%;text-align:center;color:red"> If you considered it as Error. Please Contact O2-Site-Reliability-Engineering Team. </div> </html> I want "O2-Site-Reliability-Engineering Team" to be a hyperlink, and clicking it should open a mail. Can someone guide me on this?
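One possible approach, sketched below with a placeholder mailbox address (sre-team@example.com is an assumption): a mailto: anchor inside the existing html block opens the user's default mail client when the team name is clicked.

```xml
<html depends="$show_html$">
  <div style="font-weight:bold;font-size:200%;text-align:center;color:red">
    If you considered it as Error. Please Contact
    <!-- the address and subject below are placeholders -->
    <a href="mailto:sre-team@example.com?subject=Dashboard%20Error">O2-Site-Reliability-Engineering Team</a>.
  </div>
</html>
```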
Example: my search is index=* source=*xyz*. I am getting an event with plenty of lines in string format, and I want to display 2 fields out of it, as in the example below: [2021--2-12][info][xyz][empid=12345][cliner=123]....and so on... and empuniqid=123%abcd ..and so on... From this I want to display 2 fields, empid and empuniqid. Any suggestions? Thanks
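A sketch with rex, assuming empid always appears inside square brackets and empuniqid as a bare key=value token (both patterns inferred from the sample line):

```
index=* source=*xyz*
| rex field=_raw "\[empid=(?<empid>[^\]]+)\]"
| rex field=_raw "empuniqid=(?<empuniqid>\S+)"
| table empid empuniqid
```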
I'm currently trying to find workstations that haven't been logged into by a human over a period of time. My first query is index=windows host=ch0cswop002 | stats values(ComputerName) as computers That correctly gets me a list of all the Windows workstations that are sending us logs. My second query search index=windows host=ch0cswop002 EventCode=4624 Logon_Type=10 OR Logon_Type=2 user!=DWM-* user!=UMFD-* | stats count by ComputerName correctly gives me the number of times any human logged into a specific ComputerName. When I combine the two index=windows host=ch0cswop002 | stats values(ComputerName) as ComputerName | eval count = 0 | join type=outer ComputerName [search index=windows host=ch0cswop002 EventCode=4624 Logon_Type=10 OR Logon_Type=2 user!=DWM-* user!=UMFD-* | stats count by ComputerName] | where count == 0 | table ComputerName I get a list of all the computers. If I table ComputerName,count I see only a single count value of zero for the first result. Does my error jump out at anyone, or does anyone know a better way of doing this?
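The single zero count suggests the first stats produces one multivalue row, so the eval'd count=0 exists only once. A sketch that avoids the join entirely, counting human logons per workstation in a single pass (the match() filter mirrors the user!= clauses):

```
index=windows host=ch0cswop002
| stats count(eval(EventCode=4624 AND (Logon_Type=10 OR Logon_Type=2) AND NOT match(user, "^(DWM|UMFD)-"))) as human_logons by ComputerName
| where human_logons=0
| table ComputerName
```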
What's a scalable way to extract key-value pairs where the value matches via exact or substring match, but the field is not known ahead of time and could be in _raw only? E.g., search for the string "alan", which may be associated with fields as follows: index=indexA  user=alan index=indexB username=alan index=indexC loginId=corp\alan index=indexD   _raw=<unstructured text, alan : user>
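A sketch of one approach, with the caveat that foreach over every extracted field gets expensive at scale, and the bare search term should stay in the base search so the index does the heavy filtering:

```
index=indexA OR index=indexB OR index=indexC OR index=indexD "alan"
| foreach * [ eval matched_fields=if(like(lower('<<FIELD>>'), "%alan%"), mvappend(matched_fields, "<<FIELD>>"), matched_fields) ]
| eval matched_fields=coalesce(matched_fields, "raw-only match")
| table matched_fields _raw
```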
I'm using the Splunk Add-on for Microsoft Windows to parse logs from a couple of Windows 2019 DNS servers. Things seemed to be working OK, but we noticed some weird behavior with the src_domain field whenever the domain being resolved contained a dash. The domain name would truncate there, so instead of (31)maya-apiserver-867994d7dd-tthh9(0) -> maya-apiserver-867994d7dd-tthh9 we would instead get: (31)maya-apiserver-867994d7dd-tthh9(0) -> maya This has caused some issues with our ESXi hosts, as they're all named something like vhost-01, which all got truncated down to the same vhost. We traced this to a regex used in the Windows TA that looks wrong: REGEX = (\(\d\)*[\w+\(\d\)]{1,}) This is trying to match a domain name like: (9)pod01-id1(4)eus2(6)backup(12)windowsazure(3)com(0) I have to admit that I'm fairly confused by the syntax, since it seems like an inappropriate use of a character class, but from what I can see the use of \w is restricting domain characters to word characters, which do not include the dash. In addition, it allows underscores, which aren't valid in domains. We replaced it with this regex and are now getting much better results: REGEX = (\(\d+\)(?:[a-zA-Z0-9\-]+\(\d+\)){1,}) Is there any way this change can be evaluated for the next rev of the Windows TA?
Looking at the example field below (part of a JSON event), I'm trying to figure out how at search time to pair up the corresponding values of properties.appliedConditionalAccessPolicies{}.displayName fields and properties.appliedConditionalAccessPolicies{}.result fields into new field/value pairs for each event (note that there can be multiple pairs per event - two in this example). So for example, in the event below, I would want to add two new field/value pairs to the event: Require_Duo_MFA=success Scammer_Blocked_IP_Addresses=notApplied Any ideas how to approach that?   appliedConditionalAccessPolicies: [ [-] { [-] conditionsNotSatisfied: 0 conditionsSatisfied: 3 displayName: Require Duo MFA enforcedGrantControls: [ [+] ] enforcedSessionControls: [ [+] ] id: 11111111-1111-1111-1111-111111111111 result: success } { [-] conditionsNotSatisfied: 8 conditionsSatisfied: 3 displayName: Scammer Blocked IP Addresses enforcedGrantControls: [ [+] ] enforcedSessionControls: [ [+] ] id: 22222222-2222-2222-2222-222222222222 result: notApplied } ]  
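One common sketch for pairing parallel multivalue fields, appended to the existing base search (field names are taken from the event; creating the dynamic field name via eval {policy} is the step to verify):

```
| eval pairs=mvzip('properties.appliedConditionalAccessPolicies{}.displayName', 'properties.appliedConditionalAccessPolicies{}.result', "=")
| mvexpand pairs
| rex field=pairs "^(?<policy>.*)=(?<outcome>[^=]*)$"
| eval policy=replace(policy, "\s", "_")
| eval {policy}=outcome
```

With the sample event this should yield Require_Duo_MFA=success and Scammer_Blocked_IP_Addresses=notApplied on separate expanded rows.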
Desired outcome: I am trying to create a simple timechart to show a count of ports, with the relative timeline on the x-axis. Sounds simple. The problem: it appears timecharts (for whatever reason) use the first column of the results for the x-axis values. Whenever I run a query, the _time column is the first column shown while the query is executing, but it is always moved to the last column when the query finishes. The result is that the timechart uses whatever value it sees in the first column as the x-values. If that's by design, it's rather awful, as I see no way to hard-code the x-axis values to another column. I've tried a bunch of options to move the column back to the first position, but they all fail. Here is one such option I tried; it of course does the same thing. It works great until the final results are rendered, and then the _time column again moves to the end and fouls up the x-axis values. I've also tried renaming _time to a_ or 0_ to try to force the order, but no dice. My search index blah blah.. | eval Date=strftime(_time,"%y-%m-%d %H:%M:%S") | timechart span=5m count by port useother=true limit=15 | table _time * | fields keepcolorder=t "_time" "*" I've attached images "executing" and "execution complete" to show the difference.
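For what it's worth, a stripped-down sketch may help isolate the issue: timechart itself emits _time as the first column, so the column shuffling could come from the eval'd Date string becoming the leading column. Dropping the pre-timechart eval (the base search below is the placeholder from the question) keeps _time leading:

```
index=blah blah..
| timechart span=5m count by port useother=true limit=15
```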
I have data stored in 2 indexes. One index has an attribute which can be a substring of the second index's _raw event data. I want to generate a list of every substring that was found inside that raw event. Any ideas how I can accomplish this? Thank you. I tried something like: index="index2" | rename _raw as raw | map search="search index=\"index1\" | where like($raw$,\"%\".field1.\"%\")" For some reason there is no "result" field in my output.
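A sketch that drives map from index1 instead, so each field1 value is tested as a substring against index2 events (maxsearches and the quoting/escaping are the fragile parts to verify):

```
index=index1
| fields field1
| dedup field1
| map maxsearches=100 search="search index=index2 | eval matched_substring=\"$field1$\" | where like(_raw, \"%$field1$%\")"
| table matched_substring _raw
```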
Hi all, I'm new to Splunk and want to execute a Splunk query without using the CLI or GUI. My objective is an ETL tool or a shell script that can be executed outside Splunk, with the output being loaded into a Snowflake database. Are there any workarounds, or a best approach for this use case? Could someone please guide me?
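One common sketch uses Splunk's REST search endpoints (the host, port, credentials, and query below are all placeholders): export the results of a search as CSV, which can then be staged into Snowflake.

```shell
# Run a search via the REST export endpoint and save CSV output.
# Host, port, and credentials are placeholders for your environment.
curl -k -u admin:changeme \
  https://splunk.example.com:8089/services/search/jobs/export \
  -d search="search index=main earliest=-1h | stats count by sourcetype" \
  -d output_mode=csv \
  -o results.csv
```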
Hi all, I have set up a continuous monitor of the /var/log directory and set the host to "vps". Now when I search the oslogs index, I expect all the logs to have the host value set to "vps", but that is not the case: instead of the one host "vps" I have two hosts, "vps" and "ip-172-31-17-23", the latter being the hostname of my machine. There is no additional input for the oslogs index; this is the only one I've set. What explains this anomaly, and how can it be rectified? Thanks.
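One possible explanation to check: events assigned the syslog sourcetype get their host rewritten from the event header at parse time, which overrides the input-level setting. A sketch of the input stanza (path and index are from the question; the sourcetype behavior is the part to confirm):

```
# inputs.conf (sketch)
[monitor:///var/log]
index = oslogs
host = vps
# If any of these files are typed as sourcetype=syslog, Splunk's
# default props/transforms for syslog extract host from each
# event's header, which would explain the second host value.
```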
Hi all. This rule has been driving me crazy for a while now, and the teams working on it too. I'm just looking for a way to track standard commands for user and/or group modifications. I tried using the built-in syslog keys [index="*" (sourcetype="syslog" (key=user_modification OR key=group_modification OR key=etcgroup OR key=etcpasswd))] and while this works, it throws up many false positives, such as users whose password is changed by CyberArk. Has anyone had a similar issue, and how did you best fix/monitor it?
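A sketch of one mitigation, excluding the automation identity (the src_user field name and the cyberark_svc account name are placeholders for whatever your events actually carry):

```
index=* sourcetype=syslog
    (key=user_modification OR key=group_modification OR key=etcgroup OR key=etcpasswd)
    NOT src_user="cyberark_svc"
| stats count by host, key, src_user
```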
Hi everyone. I have some hierarchical XML that can get rather large. If you're familiar with SQL - I'm visualizing Execution Plans. I have data that looks like this:     <PlanXml> <CursorInformation> <SQL_ID>00j24vf2v5uay</SQL_ID> <ADDRESS>00000000B9AF2A00</ADDRESS> <HASH_VALUE>2243094878</HASH_VALUE> <PLAN_HASH_VALUE>3599761032</PLAN_HASH_VALUE> <CHILD_ADDRESS>00000000B9AF12A8</CHILD_ADDRESS> <CHILD_NUMBER>0</CHILD_NUMBER> <TIMESTAMP>2021-04-15 07:19:52</TIMESTAMP> </CursorInformation><CursorDetails> <ID>0</ID> <PARENT_ID></PARENT_ID> <DEPTH>0</DEPTH> <POSITION>1548</POSITION> <OPERATION>SELECT STATEMENT</OPERATION> <OPTIONS></OPTIONS> <OBJECT_NODE></OBJECT_NODE> <OBJECT_NUMBER></OBJECT_NUMBER> <OBJECT_OWNER></OBJECT_OWNER> <OBJECT_NAME></OBJECT_NAME> <OBJECT_ALIAS></OBJECT_ALIAS> <OBJECT_TYPE></OBJECT_TYPE> <OPTIMIZER>ALL_ROWS</OPTIMIZER> <SEARCH_COLUMNS>0</SEARCH_COLUMNS> <COST>1548</COST> <CARDINALITY></CARDINALITY> <BYTES></BYTES> <OTHER_TAG></OTHER_TAG> <DISTRIBUTION></DISTRIBUTION> <CPU_COST></CPU_COST> <IO_COST></IO_COST> <TEMP_SPACE></TEMP_SPACE> <ACCESS_PREDICATES></ACCESS_PREDICATES> <FILTER_PREDICATES></FILTER_PREDICATES> <PROJECTION></PROJECTION> <TIME></TIME> <QBLOCK_NAME></QBLOCK_NAME> </CursorDetails><CursorDetails> <ID>1</ID> <PARENT_ID>0</PARENT_ID> <DEPTH>1</DEPTH> <POSITION>1</POSITION> <OPERATION>SORT</OPERATION> <OPTIONS>AGGREGATE</OPTIONS> <OBJECT_NODE></OBJECT_NODE> <OBJECT_NUMBER></OBJECT_NUMBER> <OBJECT_OWNER></OBJECT_OWNER> <OBJECT_NAME></OBJECT_NAME> <OBJECT_ALIAS></OBJECT_ALIAS> <OBJECT_TYPE></OBJECT_TYPE> <OPTIMIZER></OPTIMIZER> <SEARCH_COLUMNS>0</SEARCH_COLUMNS> <COST></COST> <CARDINALITY>1</CARDINALITY> <BYTES>13</BYTES> <OTHER_TAG></OTHER_TAG> <DISTRIBUTION></DISTRIBUTION> <CPU_COST></CPU_COST> <IO_COST></IO_COST> <TEMP_SPACE></TEMP_SPACE> <ACCESS_PREDICATES></ACCESS_PREDICATES> <FILTER_PREDICATES></FILTER_PREDICATES> <PROJECTION><![CDATA[(#keys=0) SUM("UI_COUNT")~22~]]></PROJECTION> <TIME></TIME> <QBLOCK_NAME>SEL$1</QBLOCK_NAME> 
</CursorDetails><CursorDetails> <ID>2</ID> <PARENT_ID>1</PARENT_ID> <DEPTH>2</DEPTH> <POSITION>1</POSITION> <OPERATION>VIEW</OPERATION> <OPTIONS></OPTIONS> <OBJECT_NODE></OBJECT_NODE> <OBJECT_NUMBER></OBJECT_NUMBER> <OBJECT_OWNER></OBJECT_OWNER> <OBJECT_NAME></OBJECT_NAME> <OBJECT_ALIAS>&quot;from$_subquery$_001&quot;@&quot;SEL$1&quot;</OBJECT_ALIAS> <OBJECT_TYPE></OBJECT_TYPE> <OPTIMIZER></OPTIMIZER> <SEARCH_COLUMNS>0</SEARCH_COLUMNS> <COST>1548</COST> <CARDINALITY>1</CARDINALITY> <BYTES>13</BYTES> <OTHER_TAG></OTHER_TAG> <DISTRIBUTION></DISTRIBUTION> <CPU_COST>327951516</CPU_COST> <IO_COST>1541</IO_COST> <TEMP_SPACE></TEMP_SPACE> <ACCESS_PREDICATES></ACCESS_PREDICATES> <FILTER_PREDICATES></FILTER_PREDICATES> <PROJECTION><![CDATA["STUFF"~NUMBER,22~]]></PROJECTION> <TIME>1</TIME> <QBLOCK_NAME>SEL$54D64B3C</QBLOCK_NAME> </CursorDetails><CursorDetails> <ID>3</ID> <PARENT_ID>2</PARENT_ID> <DEPTH>3</DEPTH> <POSITION>1</POSITION> <OPERATION>SORT</OPERATION> <OPTIONS>AGGREGATE</OPTIONS> <OBJECT_NODE></OBJECT_NODE> <OBJECT_NUMBER></OBJECT_NUMBER> <OBJECT_OWNER></OBJECT_OWNER> <OBJECT_NAME></OBJECT_NAME> <OBJECT_ALIAS></OBJECT_ALIAS> <OBJECT_TYPE></OBJECT_TYPE> <OPTIMIZER></OPTIMIZER> <SEARCH_COLUMNS>0</SEARCH_COLUMNS> <COST></COST> <CARDINALITY>1</CARDINALITY> <BYTES>37</BYTES> <OTHER_TAG></OTHER_TAG> <DISTRIBUTION></DISTRIBUTION> <CPU_COST></CPU_COST> <IO_COST></IO_COST> <TEMP_SPACE></TEMP_SPACE> <ACCESS_PREDICATES></ACCESS_PREDICATES> <FILTER_PREDICATES></FILTER_PREDICATES> <PROJECTION><![CDATA[(#keys=0) COUNT(*)~22~]]></PROJECTION> <TIME></TIME> <QBLOCK_NAME>SEL$54D64B3C</QBLOCK_NAME> </CursorDetails><CursorDetails> <ID>4</ID> <PARENT_ID>3</PARENT_ID> <DEPTH>4</DEPTH> <POSITION>1</POSITION> <OPERATION>FILTER</OPERATION> <OPTIONS></OPTIONS> <OBJECT_NODE></OBJECT_NODE> <OBJECT_NUMBER></OBJECT_NUMBER> <OBJECT_OWNER></OBJECT_OWNER> <OBJECT_NAME></OBJECT_NAME> <OBJECT_ALIAS></OBJECT_ALIAS> <OBJECT_TYPE></OBJECT_TYPE> <OPTIMIZER></OPTIMIZER> 
<SEARCH_COLUMNS>0</SEARCH_COLUMNS> <COST></COST> <CARDINALITY></CARDINALITY> <BYTES></BYTES> <OTHER_TAG></OTHER_TAG> <DISTRIBUTION></DISTRIBUTION> <CPU_COST></CPU_COST> <IO_COST></IO_COST> <TEMP_SPACE></T EMP_SPACE> <ACCESS_PREDICATES></ACCESS_PREDICATES> <FILTER_PREDICATES><![CDATA[( IS NOT NULL OR IS NOT NULL OR ("MASTER"."DATA" IS NULL AND IS NOT NULL))]]></FILTER_PREDICATES> <PROJECTION></PROJECTION> <TIME></TIME> <QBLOCK_NAME></QBLOCK_NAME> </CursorDetails><CursorDetails> <ID>5</ID> <PARENT_ID>4</PARENT_ID> <DEPTH>5</DEPTH> <POSITION>1</POSITION> <OPERATION>HASH JOIN</OPERATION> <OPTIONS></OPTIONS> <OBJECT_NODE></OBJECT_NODE> <OBJECT_NUMBER></OBJECT_NUMBER> <OBJECT_OWNER></OBJECT_OWNER> <OBJECT_NAME></OBJECT_NAME> <OBJECT_ALIAS></OBJECT_ALIAS> <OBJECT_TYPE></OBJECT_TYPE> <OPTIMIZER></OPTIMIZER> <SEARCH_COLUMNS>0</SEARCH_COLUMNS> <COST>1019</COST> <CARDINALITY>1312</CARDINALITY> <BYTES>48544</BYTES> <OTHER_TAG></OTHER_TAG> <DISTRIBUTION></DISTRIBUTION> <CPU_COST>326015926</CPU_COST> <IO_COST>1012</IO_COST> <TEMP_SPACE></TEMP_SPACE> <ACCESS_PREDICATES><![CDATA["MASTER"."DATA"="TYPE_REF"."DATA"]]></ACCESS_PREDICATES> <FILTER_PREDICATES></FILTER_PREDICATES> <PROJECTION><![CDATA[(#keys=1) "MASTER"."STUFF"~VARCHAR2,10~, "MASTER"."CT_PRIMARY_KEY2"~VARCHAR2,10~, "MASTER"."CT_PRIMARY_KEY"~VARCHAR2,15~]]></PROJECTION> <TIME>1</TIME> <QBLOCK_NAME></QBLOCK_NAME> </CursorDetails><CursorDetails> <ID>6</ID> <PARENT_ID>5</PARENT_ID> <DEPTH>6</DEPTH> <POSITION>1</POSITION> <OPERATION>TABLE ACCESS</OPERATION> <OPTIONS>BY INDEX ROWID BATCHED</OPTIONS> <OBJECT_NODE></OBJECT_NODE> <OBJECT_NUMBER>81015</OBJECT_NUMBER> <OBJECT_OWNER>OWNER</OBJECT_OWNER> <OBJECT_NAME>Table</OBJECT_NAME> <OBJECT_ALIAS>&quot;TYPE_REF&quot;@&quot;SEL$2&quot;</OBJECT_ALIAS> <OBJECT_TYPE>TABLE</OBJECT_TYPE> <OPTIMIZER></OPTIMIZER> <SEARCH_COLUMNS>0</SEARCH_COLUMNS> <COST>2</COST> <CARDINALITY>15</CARDINALITY> <BYTES>150</BYTES> <OTHER_TAG></OTHER_TAG> <DISTRIBUTION></DISTRIBUTION> 
<CPU_COST>20393</CPU_COST> <IO_COST>2</IO_COST> <TEMP_SPACE></TEMP_SPACE> <ACCESS_PREDICATES></ACCESS_PREDICATES> <FILTER_PREDICATES></FILTER_PREDICATES> <PROJECTION><![CDATA["TYPE_REF"."TABLE"~VARCHAR2,5~]]></PROJECTION> <TIME>1</TIME> <QBLOCK_NAME>SEL$54D64B3C</QBLOCK_NAME> </CursorDetails> </PlanXml>      I want the data to come out looking tabular, like this: SQL_ID CHILD_NUMBER TIMESTAMP OPERATION OPTION ID 00j24vf2v5uay 0 2021-04-15 07:19:52 SELECT STATEMENT   0 00j24vf2v5uay 0 2021-04-15 07:19:52 SORT AGGREGATE 1 etc... etc... etc.. etc.. etc.. etc..   I can get the data to come out into a table, but each row contains a subset of the parent, and the detail values don't line up from a readable perspective: I don't know what value in one column matches to a value in an adjacent column. This is the command I was working with:     source=*PLANXMLTEST.xml | table PlanXml.CursorInformation.SQL_ID,PlanXml.CursorInformation.PLAN_HASH_VALUE,PlanXml.CursorInformation.CHILD_NUMBER,PlanXml.CursorInformation.TIMESTAMP,PlanXml.CursorDetails.ID,PlanXml.CursorDetails.OPERATION,PlanXml.CursorDetails.OPTIONS,PlanXml.CursorDetails.OBJECT_OWNER,PlanXml.CursorDetails.OBJECT_NAME,PlanXml.CursorDetails.OBJECT_TYPE,PlanXml.CursorDetails.OPTIMIZER,PlanXml.CursorDetails.COST,PlanXml.CursorDetails.CARDINALITY,PlanXml.CursorDetails.BYTES,PlanXml.CursorDetails.DISTRIBUTION,PlanXml.CursorDetails.CPU_COST,PlanXml.CursorDetails.IO_COST,PlanXml.CursorDetails.TEMP_SPACE,PlanXml.CursorDetails.ACCESS_PREDICATES,PlanXml.CursorDetails.FILTER_PREDICATES,PlanXml.CursorDetails.PROJECTION | rename PlanXml.CursorInformation.SQL_ID AS SQL_ID,PlanXml.CursorInformation.PLAN_HASH_VALUE AS PLAN_HASH_VALUE,PlanXml.CursorInformation.CHILD_NUMBER AS CHILD_NUMBER,PlanXml.CursorInformation.TIMESTAMP AS TIMESTAMP,PlanXml.CursorDetails.ID AS ID,PlanXml.CursorDetails.OPERATION as OPERATION,PlanXml.CursorDetails.OPTIONS AS OPTIONS,PlanXml.CursorDetails.OBJECT_OWNER AS OBJECT_OWNER,PlanXml.CursorDetails.OBJECT_NAME 
AS OBJECT_NAME,PlanXml.CursorDetails.OBJECT_TYPE AS OBJECT_TYPE,PlanXml.CursorDetails.OPTIMIZER AS OPTIMIZER,PlanXml.CursorDetails.COST AS COST,PlanXml.CursorDetails.CARDINALITY AS CARDINALITY,PlanXml.CursorDetails.BYTES AS BYTES,PlanXml.CursorDetails.DISTRIBUTION AS DISTRIBUTION,PlanXml.CursorDetails.CPU_COST AS CPU_COST,PlanXml.CursorDetails.IO_COST AS IO_COST,PlanXml.CursorDetails.TEMP_SPACE AS TEMP_SPACE,PlanXml.CursorDetails.ACCESS_PREDICATES AS ACCESS_PREDICATES,PlanXml.CursorDetails.FILTER_PREDICATES AS FILTER_PREDICATES,PlanXml.CursorDetails.PROJECTION AS PROJECTION       Let me know if I can provide any additional info - thanks for the help! 
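One sketch for lining up the parallel CursorDetails values: zip the multivalue fields together, expand one row per detail, then split back out, so each row carries a matching ID/OPERATION/OPTIONS (extending to the remaining columns follows the same mvzip pattern):

```
source=*PLANXMLTEST.xml
| eval zipped=mvzip(mvzip('PlanXml.CursorDetails.ID', 'PlanXml.CursorDetails.OPERATION', "|"), 'PlanXml.CursorDetails.OPTIONS', "|")
| mvexpand zipped
| rex field=zipped "^(?<ID>[^|]*)\|(?<OPERATION>[^|]*)\|(?<OPTIONS>.*)$"
| rename PlanXml.CursorInformation.SQL_ID as SQL_ID, PlanXml.CursorInformation.CHILD_NUMBER as CHILD_NUMBER, PlanXml.CursorInformation.TIMESTAMP as TIMESTAMP
| table SQL_ID CHILD_NUMBER TIMESTAMP OPERATION OPTIONS ID
| sort SQL_ID ID
```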
What changes does Splunk Security Essentials make to Splunk Enterprise Security and what needs to be backed up to avoid issues when upgrading either ES or SE?
Hello, I am using the chart command in order to display data using a line chart: | chart values("torque") as variable over sort by HoleNb But I get the following results: Therefore, the visualisation is not initialized properly. How do I split the results in order to have something like this?
sort | 2941 | 2945 | 2946 | 2948 | 2950 | 2951 | 2952 | 2954 | 2955 | 2957 | OTHER
50   | 2330 | 2180 | 2050 | 2180 | 2520 | 1740 | 2120 | 2830 | 2330 | 2050 | 1150
50   | 2890 | 2700 | 2700 | 3230 | 3290 | 2770 | 3360 | 2890 | 2520 | 2830 | 1210
50                     1280
50                     1340
50                     1400
50                     1490
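One thing worth trying, sketched below: values() can produce multivalue cells, which a line chart cannot plot as single points; an aggregate such as avg (or max/first, depending on what torque represents) yields one numeric value per cell:

```
| chart avg(torque) as variable over sort by HoleNb limit=0
```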
Hi, is there a way, from a dashboard perspective, to present a chart of 2 big groups such that clicking on the legend (or anything available) gives a more granular view of the selected big group? For example, BigGroup 1 consists of 10 subgroups and BigGroup 2 consists of 20 subgroups. On the dashboard I will present the utilization of those 2 big groups, and then if I click on the chart or legend for BigGroup 1, it will present its 10 subgroups. Thanks and regards,
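A sketch in Simple XML (the index, field names, and searches are placeholders): the first chart sets a token on click, and a second panel, shown only when the token exists, filters on it.

```xml
<panel>
  <chart>
    <search><query>index=metrics | stats avg(util) by BigGroup</query></search>
    <drilldown>
      <!-- capture the clicked BigGroup value into a token -->
      <set token="selected_group">$click.value$</set>
    </drilldown>
  </chart>
</panel>
<panel depends="$selected_group$">
  <chart>
    <search><query>index=metrics BigGroup="$selected_group$" | stats avg(util) by SubGroup</query></search>
  </chart>
</panel>
```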
Hi, I can't do a search in Splunk where the values are the result of another search. I search: index = summary | search ..... | table LINK, OLD_LINK, DATE, ID The result is:
LINK | OLD_LINK | DATE     | ID
100  | 10       | 01/02/21 | 1
101  | 11       | 02/01/21 | 2
.........
In the same index I now want to find all the links that appear in the OLD_LINK field of the previous search, and extract the DATE and ID for that link (OLD_LINK), that is:
LINK | OLD_LINK | DATE     | ID
10   | -        | 10/10/20 | 99
11   | -        | 15/08/20 | 77
and at the end have a table like:
LINK | OLD_LINK | DATE     | ID | OLD_DATE | OLD_ID
100  | 10       | 01/02/21 | 1  | 10/10/20 | 99
101  | 11       | 02/01/21 | 2  | 15/08/20 | 77
I tried JOIN but it doesn't work. Can you help me? I should state that I am not an admin and do not have permissions to create a lookup table, so I have to do it in a single query (I think). Thanks. Bye, Antonio
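A sketch using a subsearch join on the renamed key (field names taken from the question; the base-search filters are elided as in the original):

```
index=summary
| table LINK, OLD_LINK, DATE, ID
| join type=left OLD_LINK
    [ search index=summary
      | rename LINK as OLD_LINK, DATE as OLD_DATE, ID as OLD_ID
      | fields OLD_LINK, OLD_DATE, OLD_ID ]
| table LINK, OLD_LINK, DATE, ID, OLD_DATE, OLD_ID
```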
Hi, the field dv_sys_created_on is a date field. index="tutu" sourcetype="toto" | stats last(dv_sys_created_on) as Opened by ticket_id I tried to sort it like this, but it doesn't work: | eval time = strftime(dv_sys_created_on, "%d-%m-%y %H:%M") | sort - dv_sys_created_on Could you help please?
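A sketch, assuming dv_sys_created_on is a string timestamp (the strptime format below must be adjusted to the field's actual format): convert to epoch with strptime before sorting — strftime goes the other way, from epoch to string.

```
index="tutu" sourcetype="toto"
| stats last(dv_sys_created_on) as Opened by ticket_id
| eval opened_epoch=strptime(Opened, "%Y-%m-%d %H:%M:%S")
| sort - opened_epoch
```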
index=xy device_event_class_id=Bandwidth earliest=-1d@d latest=-0d@d
| rex field=msg "msg=.+raffic.+'(?<pg_name>[\w\s\-]+)'.+(?<bps>\d+\.\d+\s.+)\..+"
| eval ReportKey="yesterday"
| timechart span=3h count by pg_name
| append
    [search index=xy device_event_class_id=Bandwidth earliest=-2d@d latest=-1d@d
    | rex field=msg "msg=.+raffic.+'(?<pg_name>[\w\s\-]+)'.+(?<bps>\d+\.\d+\s.+)\..+"
    | eval ReportKey="beforeyesterday"
    | timechart span=3h count by pg_name ]
| fillnull value=0
| eval mytime=strftime(_time, "%H:%M")
| sort mytime
I'm trying to create a chart containing two timecharts for different time frames (yesterday and the day before). How can I achieve this? Currently I'm getting one after the other; I'd basically like to overlay one timechart on the other.
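A sketch using timewrap, which overlays successive periods of a single timechart onto one time axis (note it replaces the by pg_name split, so filter to one process group first if that breakdown is needed):

```
index=xy device_event_class_id=Bandwidth earliest=-2d@d latest=-0d@d
| rex field=msg "msg=.+raffic.+'(?<pg_name>[\w\s\-]+)'.+(?<bps>\d+\.\d+\s.+)\..+"
| timechart span=3h count
| timewrap 1day
```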
I have a field DivionsID with data like Exe.123. How do I trim this to just 123?
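A sketch with replace, assuming the prefix is always the literal "Exe." (a split()/mvindex() pair on "." would also work):

```
| eval DivionsID=replace(DivionsID, "^Exe\.", "")
```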
Hi, how should I set the TLS settings (or the cert) on a Universal Forwarder to be compatible with the HF? My UF is supposed to send data to an HF in my environment, but I see the error below in splunkd.log: "ssl3_get_client_hello wrong version number". This UF server was down for a long time and I renamed it after powering it on, so it might need a new cert to handshake with the HF, but I can't find the exact steps to follow.
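One sketch to check first (the stanza name and server host are placeholders): a "wrong version number" during the hello usually means one side is speaking TLS while the other expects plain TCP, so confirm the UF's outputs.conf SSL setting matches the HF's receiving-port configuration before regenerating certs.

```
# outputs.conf on the UF (sketch; server name is a placeholder)
[tcpout:primary]
server = hf.example.com:9997
# Enable SSL only if the HF's [splunktcp-ssl] input is listening on
# that port; a TLS/plain-TCP mismatch on either side produces
# "wrong version number" errors during the handshake.
useSSL = true
```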