All Topics

I am trying to set the timestamp for this event:

======== Sat Mar 19 16:33:08 2022 -05:00 LENGTH : '228' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' =========

The rules I used are:

TIME_FORMAT = %a %b %d %H:%M:%S %Y %:z
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 32

It is catching the timestamp correctly, but Splunk shows the error "could not use strptime to parse timestamp from LENGTH : '228'". I am not sure how to resolve the error.
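One likely cause: with TIME_PREFIX = ^, the 32-character lookahead is counted from the very start of the line, so the "======== " banner eats part of the budget and the parser keeps scanning into LENGTH : '228'. A hedged sketch (the stanza name is a placeholder) anchors TIME_PREFIX to the run of equals signs so parsing starts right at the date:

```
[your_sourcetype]
# Skip the leading "======== " banner before looking for the date
TIME_PREFIX = ^=+\s+
TIME_FORMAT = %a %b %d %H:%M:%S %Y %:z
MAX_TIMESTAMP_LOOKAHEAD = 32
```

With the prefix consumed, the 32-character lookahead comfortably covers "Sat Mar 19 16:33:08 2022 -05:00".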
Hello, I have a text source file with a header. Some sample events (the first line is the header) and the props I wrote are given below. My props is working OK, except that it breaks the events at TEST\2qw123|Employee, TEST\3eraa2|Employee, TEST\87xaqw|Employee, at Obj.BasePage.Page, TEST\m69xcb, at Obj.BasePage.Page, and TEST\7yxccd|Employee, instead of breaking the events at TEST\2qw123|Employee, TEST\3eraa2|Employee, TEST\87xaqw|Employee, TEST\m69xcb, and TEST\7yxccd|Employee. So from the following sample events I should get 5 events, but I am getting 7. Any help will be highly appreciated. Thank you.

UserID|UserType|System|EventType|EventID|Subject|SessionID|SrcAddr|EventStatus|TimeStamp|AdditionalData|DeviceID|DestSrcAddr
TEST\2qw123|Employee|COM|TESTUSER|NTINCheckKCase|089524234|ybzjlie3d4ayr1i2|10.212.48.121|00|20220217122935|Case Information request: (Case-170) - 201612-30|mct0ma01ma4352855|10.219.174.222
TEST\3eraa2|Employee|COM|TESTUSER|NTINCheckKCase|046453942|ybzjlie3d4ayr1i2|10.212.48.121|00|20220217123142|Case Information request: (Case -85) - 201912-30|mct0ma01ma4352855|10.219.174.222
TEST\87xaqw|Employee|COM|SYSTEM|SystemMsg||zsod0mvomcelp3hvln5smm1u|10.216.22.17|01|20220217124743|Type:'error'; Ref:'Case/CaseInventory.aspx?Query=true&Scope=ServiceWide'; Msg: experienced <br>Source: App_Web_pc<br>Message: Object reference not set to an instance of an object.<br> /Case/CaseInventory.aspx<br>Trace: at Case.CaseInventory() at Obj.BasePage.Page_Load(Object sender, EventArgs e)<br><br>Please try to login again.|mct0ma01ma4382154|10.210.174.221
TEST\m69xcb|Employee|COM|SYSTEM|SystemMsg||z0ae3c25zggbzx5p|10.215.173.231|01|20220217130933|Type:'error'; Ref:'Case/CaseInventory.aspx?Query=true&Scope=ServiceWide'; Msg: experienced a error:<br><br>Source: App_Web_pcf3kniw<br>Message: Object reference not set to an instance of an object.<br> /Case/CaseInventory.aspx<br>Trace: at Case.CaseInventory.page_load3() at Obj.BasePage.Page_Load(Object sender, EventArgs e)<br><br>Please try to login again.|mct0ma01ma4353159|10.210.174.221
TEST\7yxccd|Employee|COM|TESTUSER|NTINCheckKCase|008422123|zggbzx5pzgnw1nih|10.215.173.231|00|20220217131108|Case Information request: (Case -24) - 202112-30|mct0ma1ma4353159|10.210.174.221

[sourcename]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
INDEXED_EXTRACTIONS=psv
MAX_TIMESTAMP_LOOKAHEAD=14
HEADER_FIELD_LINE_NUMBER=1
TIME_FORMAT=%Y%m%d%H%M%S
TIMESTAMP_FIELDS=TimeStamp
TRUNCATE=2000
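The two SystemMsg records carry embedded line breaks inside the Msg text (the ASP.NET stack traces), so a bare LINE_BREAKER=([\r\n]+) splits inside them. One hedged sketch ties the break to the start of the next record instead, assuming every record really begins with TEST\ (and note that INDEXED_EXTRACTIONS parsing happens on the forwarder, so the stanza would need to live there):

```
[sourcename]
SHOULD_LINEMERGE = false
# Break on newlines only when the next line starts a new TEST\... record
LINE_BREAKER = ([\r\n]+)(?=TEST\\)
INDEXED_EXTRACTIONS = psv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = TimeStamp
TIME_FORMAT = %Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 14
TRUNCATE = 2000
```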
Hi Splunk members, I am relatively new to Splunk and I wanted to ask a very basic question. How can I change the Y-axis display labels on a scatter plot without compromising the 1-to-3 scale? That is, map 1 to "low", 2 to "medium", and 3 to "high".

| eval Alert_Level = case(Alert_Level = "Very Poor", 1, Alert_Level = "Poor", 2, Alert_Level = "Fair", 3)
| table Xaxis Yaxis Alert_Level

This is incorrect, as the plot treats the value as a string rather than a number.

| eval Alert_Level2 = case(Alert_Level = 1, "low", Alert_Level = 2, "medium", Alert_Level = 3, "high")
| table Xaxis Yaxis Alert_Level2

Or else I was trying the options below, but I am not sure if I am doing it in the best way. XML source:

<option name="charting.chart.showDataLabels">high</option>
<option name="charting.axisTitleY.visibility">1:"Low", 2:"Medium", 3:"High"</option>

Your help would be much appreciated. Thanks
I have the following data:

Service    Message
Service1   Hello world
Service2   Another message
Service1   Hello world
Service1   Some other message

How can I write a query such that the final output looks like "Service : Unique message : count"?

For example:

Service1 : Hello world : 2
Service1 : Some other message : 1
Service2 : Another message : 1
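Counting unique Service/Message pairs is a stats aggregation. A minimal sketch, assuming the fields are literally named Service and Message as shown, and then formatting each row into the colon-separated shape asked for:

```
| stats count by Service, Message
| eval output = Service . " : " . Message . " : " . count
| table output
```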
I have the following data:

Service    Message
Service1   Hello world
Service2   Another message
Service1   Hello world
Service1   Some other message

How can I write a report such that the final output looks like "Service / Message : count"?

For example, with the data above, it should be:

Service1
  Hello world : 2
  Some other message : 1
Service2
  Another message : 1
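A per-service grouping like this can be sketched with two stats passes (field names assumed as shown): the first counts each pair, the second rolls the "message : count" lines up into one multivalue row per service.

```
| stats count by Service, Message
| eval line = Message . " : " . count
| stats list(line) as Messages by Service
```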
Considering a field like:

field=select id from table where id In ["123","12"] limit 1
field=select id from table where id="123" limit 1

How can I write a query so that all the values in quotes are replaced by a placeholder? For example, an ideal output would be:

field=select id from table where id In ["xxx","xx"] limit 1
field=select id from table where id="xxx" limit 1

The values within quotes are alphanumeric.
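One hedged way is a sed-mode rex over the field (this assumes the quoted values are plain alphanumerics with no escaped quotes inside them). Note it substitutes a fixed placeholder, so "12" becomes "xxx" rather than the length-preserving "xx" shown in the ideal output; preserving length would need a per-character replacement instead.

```
| rex field=field mode=sed "s/\"[a-zA-Z0-9]+\"/\"xxx\"/g"
```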
Hello everybody, this is actually my first post here, so forgive me if I messed up or posted in the wrong section. I'm trying to compare/correlate two field values from different source types in the same index. Please find two samples of the events I'm trying to work on.

1) Sample of the first source type:

index=wineventlog sourcetype=Script:ListeningPorts
host=computer1 dest=172.*.*.* dest_port=50000 process_id=151111

2) Sample of the second source type:

index=wineventlog sourcetype=WinHostMon source=process
host=computer1 Path=***.exe Process=**.exe ProcessId=151111

I'm trying to correlate the process_id and ProcessId fields to get the process field and make a count table. Sample output:

process_id | dest_port | count | host      | Path   | process
151111     | 50000     | 1     | Computer1 | **.exe | **.exe

I tried this query, but it didn't give me the right result:

index=wineventlog sourcetype=Script:ListeningPorts dest="172.*.*.*" host="Computer1"
| table host process_id dest, dest_port
| rename process_id as ProcessId
| join type=inner host ProcessId
    [search index=wineventlog sourcetype=WinHostMon
     | table ProcessId dest_port host Path process]
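join is subject to subsearch result limits, which often explains "not the right result". A common alternative is one search over both sourcetypes with a stats rollup on a shared key. A hedged sketch (field names taken from the samples above; the final where clause keeps only process IDs seen in both sourcetypes):

```
index=wineventlog ((sourcetype=Script:ListeningPorts dest="172.*.*.*") OR (sourcetype=WinHostMon source=process))
| eval pid = coalesce(process_id, ProcessId)
| stats values(dest_port) as dest_port values(Path) as Path values(Process) as process count by host pid
| where isnotnull(dest_port) AND isnotnull(process)
```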
I have been using dark theme in dashboards. Is it possible to have dark theme in embedded reports?
These are ticket platform logs with a field 'lastupdated' which contains time and date, e.g. [2022-04-12 12:12:17.160000+00:00]. I am trying to build a weekly chart where only results whose 'lastupdated' falls after the current week's Monday should be displayed. Something like: if (lastupdated > monday).
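In SPL the Monday boundary can come from relative_time with a week snap ("@w1" snaps to Monday 00:00 of the current week). A minimal sketch, assuming lastupdated is a string in exactly the format shown (the timezone token may need adjusting between %:z and %z depending on Splunk version):

```
| eval updated = strptime(lastupdated, "%Y-%m-%d %H:%M:%S.%6N%:z")
| where updated >= relative_time(now(), "@w1")
```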
Hi, I have two sources, A and B (routers); they are sending their data over UDP port 514. All of a sudden, source B is not being indexed anymore. I have captured the traffic (tcpdump) and I can see clearly that the traffic is reaching the Splunk server. My Splunk deployment is a free-license all-in-one server. Any thoughts? Thanks, heloma
Hi, I can't download the program; it just keeps reloading the page without letting me in, and after about 15 minutes it shows "bad gateway". What should I do?
I have a threat activity rule that looks at both internal IPs attempting communication externally to malicious IPs based on Threat Intelligence lookups/feeds, and vice versa. However, I would like to tune my search to filter out the events that have been blocked by the firewall or proxy and alert only on the true positives, and I'm not sure if I should tune the search itself or modify the Threat Intelligence data model. Has anyone done or come across this before? Please help.
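A hedged starting point is to tune the search rather than the data model: keep the threat-match logic and subtract events whose action shows the firewall or proxy already stopped them. This assumes the contributing events carry a CIM-style action field with values like those below; if the Threat Activity data model rows don't expose action, the filter would instead go into the base search that feeds the threat matching.

```
... your existing threat activity correlation search ...
| search NOT action IN ("blocked", "dropped", "denied")
```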
Hello Splunkers, I have a query where I did a | stats values(abc) as abc command over time, and I got the results below. I want the abc column to omit consecutive duplicate results: compare each row with the one above and keep only the rows whose values, or count of values, differ. You can see below that 2022-04-07 12:41:17 and 2022-04-04 10:16:34 have the same abc value; I want to omit the second one, then compare it with the 3rd result, which is different, so keep that, then compare the 3rd with the 4th, and continue that way. Also keep the rows which have fewer values, like 2022-03-07 11:48:46. I tried dedup but it did not work. Any suggestions?

2022-04-07 12:41:17  1334821020002 1334821020007 1334821020011 1334821020024 1334821020027 1334821020043 1334821020053 1334821020075
2022-04-04 10:16:34  1334821020002 1334821020007 1334821020011 1334821020024 1334821020027 1334821020043 1334821020053 1334821020075
2022-03-22 07:52:24  1335221020082 1335221020268 1335221020282 1335221020591 1335221020597 1335221020619 1335221020721 1335221020848
2022-03-22 07:36:36  1335221020082 1335221020268 1335221020282 1335221020591 1335221020597 1335221020619 1335221020721 1335221020848
2022-03-18 06:31:18  1335221020082 1335221020268 1335221020282 1335221020591 1335221020597 1335221020619 1335221020721 1335221020848
2022-03-14 13:11:15  1335221020082 1335221020268 1335221020282 1335221020591 1335221020597 1335221020619 1335221020721 1335221020848
2022-03-09 06:42:36  1335221020082 1335221020268 1335221020282 1335221020591 1335221020597 1335221020619 1335221020721 1335221020848
2022-03-07 11:48:46  1335221020591 1335221020597 1335221020619 1335221020721 1335221020848

Thanks in advance
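dedup removes all duplicates rather than only consecutive ones, which is why it didn't work here. One hedged sketch joins the multivalue abc into a comparable key and uses streamstats to carry the previous row's key forward (this assumes the rows are already in the time order shown):

```
| eval abc_key = mvjoin(abc, ",")
| streamstats current=f window=1 last(abc_key) as prev_key
| where isnull(prev_key) OR abc_key != prev_key
| fields - abc_key prev_key
```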
Greetings Splunk Community, I am currently working on a search and I am trying to drop rows that have "NULL" in them. The problem I am running into is that some of my rows with "NULL" have values like "nullnullNULL" or "nullNULL". Is there a way I can remove any row that contains "NULL" regardless of the other info in it? Thanks in advance!
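A case-insensitive regex filter drops every row where the field contains "null" anywhere, which covers "NULL", "nullNULL", and "nullnullNULL" alike. A minimal sketch (the field name myfield is a placeholder for your actual field):

```
| regex myfield!="(?i)null"
```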
All, I'm using the SaaS controller. I'm familiar with metric rollup, but there is a difference in data granularity between the SaaS UI and the output from the metric-data REST endpoint. Currently I'm looking at some data in the UI and it shows 1-minute intervals for a week ago. However, when I query the same approximate time range from the REST API, it gives me 1-hour intervals, even with rollup set to false. I've tried adjusting the start time of my REST query to be less than 1 week ago; the time range of my query is 2 hours. Through trial and error, it looks like the API returns 1-hour granularity for times more than about 24 hours ago. Somewhere between 12 and 24 hours ago I get 10-minute granularity, and 1-minute granularity for anything more recent. Is there a way I can get the 1-minute granularity from REST? Obviously the data exists, since the UI is showing it to me. Thanks
Hello, since 2018 our application has been logging to Azure Storage, in a single container, with "folders" broken down as:

/Year2018/Month04/Day12/Hour15/Minute20/Subscription123/User/456/logtype.log

My goal is to pull these logs (JSON) into Splunk, so I've set up the Add-on and begun ingesting data... but it kept stopping at 2018, never getting to 2019/2020/2021/2022. Investigating why, after quite a bit of tinkering around, I found some internal logs that indicated:

The number of blobs in container [containername] is 5000

which, upon further research, is the maximum number of records returned without hitting a MoveNext marker, because of forced paging in the API. So I could go edit the Python script myself, but is there another or better way to do this, or is a fix for this already in the works? And if not and I make the change, is there a GitHub repo or something I can submit the change to?
I have a dashboard set up that returns a few searches for my organization. When I click the export button underneath a search, the export-results box pops up. When I click export, it opens my file explorer at the last location I was at. However, I also have Splunk set up on another network that always exports automatically to my Downloads folder. So I am wondering how to get that Splunk to open my last file-explorer location when I go to export some results.
Does anyone know the list of messages, and what they mean, when running ./splunk check-integrity -bucketPath [ bucket path ] [ -verbose ]? I searched all of Splunk, opened a ticket, and spoke to professional services with no luck. Here are some examples of the messages:

Hash corresponding to slice...
Slices hash file (l1Hashes...
Hash of l1Hashes...
Journal has no hashes
Hash files missing in the bucket
Hello, what's the major difference between Splunk 8.2.4 and Splunk 8.2.6?
We're running into an issue using the Add-on for AWS with SQS-based S3 inputs to pull Aurora logs from S3 buckets. The .gz data in the buckets appears "double zipped", so even after Splunk extracts and indexes the data, our search results are still zipped and look like this:

��G�!x��#�V�~������ ��D����;~-lǯ=�������D�

Our AWS admins confirmed the logs are zipped just once by Kinesis en route to the S3 buckets (not a .zip containing groups of .gz), but even extracting data directly from journal.gz on the indexers doesn't reveal plaintext logs. Any potential solutions/fixes for this?
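A quick way to check the layering outside Splunk is to peel the gzip manually. The sketch below simulates a double-zipped object locally (bucket and object names are omitted since we can't fetch the real ones here): one gunzip pass still leaves a gzip stream, and two passes recover the plaintext.

```shell
# Simulate a payload that was gzipped twice (as suspected of the S3 objects)
printf 'sample aurora log line\n' | gzip | gzip > double.gz

# One pass: the output still starts with the gzip magic bytes (1f 8b)
gunzip -c double.gz | head -c 2 | od -An -tx1

# Two passes: the plaintext comes back
gunzip -c double.gz | gunzip -c
```

Running the same two-pass check against a real object (e.g. aws s3 cp <object> - | gunzip | gunzip) would confirm whether a second decompression layer is what's missing in the ingestion path.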