All Posts

Hi Splunk Community, I have a new issue concerning this config, some particular behaviour that I don't understand. Here is my configuration:

# classic input on TCP, this is syslog logs
[tcp://22000]
sourcetype = mydevice:sourcetype
index = local_index
_TCP_ROUTING = local_indexers:9997

# Idea is to clone the sourcetype, but not logs containing LAN1 and LAN2, it's not necessary for the second Splunk
[mydevice-clone]
CLONE_SOURCETYPE = mydevice:clone
REGEX = ^((?!LAN1|LAN2).)*$
DEST_KEY = _SYSLOG_ROUTING
FORMAT = sending_to_second_splunk

# on the props I apply the configuration made on the transforms
[mydevice:sourcetype]
TRANSFORMS-clone = mydevice-clone

# IP of the HF that will send data to the second Splunk
[syslog:sending_to_second_splunk]
server = 10.10.10.10:port
type = tcp

Issue encountered: this configuration works partially.
- Data is properly indexed to the second Splunk, without LAN1 and LAN2 data.
- Data containing LAN1 and LAN2 is indexed on the local indexer.
- However, the sourcetype mydevice:clone is also indexed on my local indexer, resulting in some data being indexed twice with two different sourcetypes.

I don't understand why this is happening and I am seeking help to resolve this issue; I have the feeling that I'm missing something. Thanks, Nicolas

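A quick way to confirm what is actually landing in the local index is to count events there by sourcetype; a minimal check, assuming the index name local_index from the configuration above:

| tstats count where index=local_index by sourcetype
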
I'm creating a Multiple Locked Accounts search query. While checking the account first, if it has 4767 (unlocked) it should ignore accounts that have 4767 within a span of 4 hours. This is my current search query and I'm not sure if the "join" command is working.

index=*
| join Account_Name
    [ search index=* EventCode=4740 OR EventCode=4767
    | eval login_account=mvindex(Account_Name,1)
    | bin span=4h _time
    | stats count values(EventCode) as EventCodeList count(eval(match(EventCode,"4740"))) as Locked, count(eval(match(EventCode,"4767"))) as Unlocked by Account_Name
    | where Locked >= 1 and Unlocked = 0 ]
| stats count dc(login_account) as "UniqueAccount" values(login_account) as "Login_Account" values(host) as "HostName" values(Workstation_Name) as Source_Computer values(src_ip) as SourceIP by EventCode
| where UniqueAccount >= 10

So if I'm not to use

| eval _raw="StandardizedAddres SUCCEEDED - FROM: {\"StandardizedAddres.................."

should I use

| eval msgTxt="StandardizedAddres SUCCEEDED - FROM: {\"StandardizedAddres\":\"SUCCEEDED\",\"FROM\":{\"Address1\":\"123 NAANNA SAND RD\",\"Address2\":\"\",\"City

And do I not include the /?

Hi @predatorz

These are just two of many components that make up the Splunk product, presumably abstracted away from splunkd to prevent a huge monolithic system. The main splunkd process will launch child processes such as these depending on your configuration and the features enabled. It sounds like Nessus is being overcautious here; however, if you require confirmation of exactly what these processes are doing, I would recommend reaching out to Splunk Support or your Account Team, who should be able to help further.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.

Just a heads up: it was indeed an issue with the extraction of the fields; my events are so big that Splunk stops extracting fields at some point. Thanks all for the help.

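For reference, automatic key/value extraction only looks a limited distance into each event before giving up. A minimal sketch of the relevant limits.conf setting, assuming the automatic KV limit is the one being hit (the value shown is illustrative):

# limits.conf on the search head
[kv]
# how far into _raw automatic field extraction will look (default is 10240 characters)
maxchars = 40960
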
Hi @msatish

I'm not familiar with the MOVEit product, however I think I found their logging documentation, which suggests that this is a Windows-based service from a company called Progress? If so, it looks like the logs are written to the file system at C:\Program Files\MOVEit\Logs, although this may vary depending upon the install location.

If this is the case then the best way to onboard these logs will be using a Splunk universal forwarder (UF) configured to monitor the relevant log location and send to your Splunk Cloud stack using the UF forwarding app, which you can download from your stack and which contains the relevant configuration to send the data.

Once you are receiving the data you should verify the relevant props configs to ensure time and event parsing are optimal.

If I have got completely the wrong end of the stick then please provide a link to the product/vendor that you are using and I will do some more investigation.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.

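As a rough illustration of the forwarder side, a monitor stanza along these lines could be deployed to the UF; the path is the one mentioned above, while the index and sourcetype names here are assumptions rather than MOVEit-documented values:

# inputs.conf on the universal forwarder (index/sourcetype names are hypothetical)
[monitor://C:\Program Files\MOVEit\Logs]
index = moveit
sourcetype = moveit:log
disabled = 0
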
How do I onboard MOVEit Server database logs, which are hosted on-prem, to Splunk Cloud? What is the preferred method?

Hi @Jessydan

Couldn't agree more with the others regarding stats - it seems you're having issues extracting your message/ID in these examples though - does the following work for you? I used your provided sample data in the below:

| windbag
| head 2
| streamstats count as row_number
| eval _raw=if(row_number==1, "{\"severity\":\"INFO\",\"logger\":\"com.PayloadLogger\",\"thread\":\"40362833\",\"message\":\"RECEIVER[20084732]: POST /my-end-point Headers: {sedatimeout=[60000], x-forwarded-port=[443], jmsexpiration=[0], host=[hostname], content-type=[application/json], Content-Length=[1461], sending.interface=[ANY], Accept=[application/json], cookie=[....], x-forwarded-proto=[https]} {{\\\"content\\\":\\\"Any content here\\\"}}\",\"properties\":{\"environment\":\"any\",\"transactionOriginator\":\"any\",\"customerId\":\"any\",\"correlationId\":\"any\",\"configurationId\":\"any\"}}", "{\"severity\":\"INFO\",\"logger\":\"com.PayloadLogger\",\"thread\":\"40362833\",\"message\":\"SENDER[20084732]: Status: {200} Headers: {Date=[Mon, 05 May 2025 07:27:18 GMT], Content-Type=[application/json]} {{\\\"generalProcessingStatus\\\":\\\"OK\\\",\\\"content\\\":[]}}\",\"properties\":{\"environment\":\"any\",\"transactionOriginator\":\"any\",\"customerId\":\"any\",\"correlationId\":\"any\",\"configurationId\":\"any\"}}")
| spath input=_raw
| rex field=message "^(?<msgType>[A-Z]+)\[(?<id>[0-9]+)\].*"
| stats range(_time) as duration, count, values(msgType) as msgType by id
| where isnotnull(msgType) AND msgType="RECEIVER" AND msgType="SENDER"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.

Hi @RSS_STT

The predict command can take a number of fields, such as in this example below, allowing you to run the predict against all your drives.

| makeresults count=5
| streamstats count
| eval instance = case(count%3==1, "C:", count%3==2, "D:", true(), "E:")
| eval Value = case(instance=="C:", 90 - count*5, instance=="D:", 80 - count*4, instance=="E:", 70 - count*3)
| append
    [| makeresults count=5
    | eval _time = relative_time(now(), "-1h")
    | streamstats count
    | eval instance = case(count%3==1, "C:", count%3==2, "D:", true(), "E:")
    | eval Value = case(instance=="C:", 880 - count*5, instance=="D:", 82 - count*4, instance=="E:", 70 - count*3)]
| fields _time, instance, Value
| timechart min(Value) as "FreeSpace" by instance
| fillnull "C:" "D:" "E:"
| predict "C:" "D:" "E:" algorithm=LLP5 future_timespan=180

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.

Apparently you have problems with proper extraction of the message field, so you should verify your data onboarding and start by fixing the extractions.

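For JSON events like the samples in this thread, a starting point could be search-time JSON extraction on the sourcetype; a minimal sketch, with a hypothetical sourcetype name standing in for the real one:

# props.conf (sourcetype name is illustrative)
[my:json:sourcetype]
KV_MODE = json

As a purely search-time check, piping in | spath input=_raw will also show whether the message field then populates.
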
I'm a bit puzzled now: while both of the queries you proposed are working, they raise the same issue.

With this one I get the expected output, but with way fewer transactions than expected (like 10 instead of 100):

("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| stats list(_raw) as _raw list(message) as message min(_time) as start_time max(_time) as end_time by id
| eval duration=end_time - start_time, eventcount=mvcount(_raw)
| eval request=mvindex(message, 0)
| eval response=mvindex(message, 1)
| table id, duration, count, request, response, _raw

And with this one, I have the same issue where _raw contains the response but I don't get it in the response field; it is properly present for only around 10% of the transactions:

index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| eval request=if(searchmatch("SENDER["), message, null())
| eval response=if(searchmatch("\"RECEIVER[\" AND \"POST /my-end-point*\""), message, null())
| stats range(_time) as duration, count, values(request) as request, values(response) as response, values(_raw) as _raw by id
| where isnotnull(request)

Assuming instance contains the disk you want to predict, you could try something like this:

index=main host="localhost" instance="C:" sourcetype="Perfmon:LogicalDisk" counter="% Free Space"
| eval instance=substr(instance,0,1)
| timechart min(value) as "Used Space" by instance
| appendpipe
    [| fields _time C
    | where isnotnull(C)
    | predict C algorithm=LLP5 future_timespan=180]
| appendpipe
    [| fields _time D
    | where isnotnull(D)
    | predict D algorithm=LLP5 future_timespan=180]
| appendpipe
    [| fields _time E
    | where isnotnull(E)
    | predict E algorithm=LLP5 future_timespan=180]

It's hard to say what's "wrong" without knowing your data, but while transaction can sometimes be useful (in some strange use cases), it's often easier and faster to simply use stats, mostly because transaction has loads of limitations that stats doesn't have. A quick glance at your search suggests that for some reason the message field is not extracted properly from your event, so you're not getting two separate values in your multivalued message output field. As I said, I'd go with

index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| eval request=if(searchmatch("SENDER["), message, null())
| eval response=if(searchmatch("\"RECEIVER[\" AND \"POST /my-end-point*\""), message, null())
| stats range(_time) as duration, count, values(request) as request, values(response) as response, values(_raw) as _raw by id

transaction can silently ignore data, depending on data volume and the time between start and end, and you will not get any indication that data has been discarded. It's far better to use stats to group by id - which you appear to have. At the simplest level you can replace transaction with stats like this:

index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| stats list(_raw) as _raw list(message) as message min(_time) as start_time max(_time) as end_time by id
| eval duration=end_time - start_time, eventcount=mvcount(_raw)
| eval request=mvindex(message, 0)
| eval response=mvindex(message, 1)
| table id, duration, count, request, response, _raw

Ok, a word of advice - it's usually better to specify indexes explicitly than to have them searched by default, especially with an admin role! It spares you unnecessary load on your environment for searches in which you haven't specified the indexes, and it saves you a lot of debugging when you have different roles with different default indexes and people report mismatches in search functionality. You have been warned.

One additional hint - it's way better to do a quick check with

| tstats count where index=rapid7 by sourcetype

than

index=rapid7 | stats count by sourcetype

The first one only checks the summarized indexed fields while yours needs to plow through all events in the index.

And there is something that doesn't add up. On Cloud you cannot have the admin user role. You can only have sc_admin (which is a limited admin role). So if you're trying to edit the admin role you shouldn't be able to do so.

Not sure that's what you expect, let me know if you need something else. Here are two raw events that my query matched together, but response is not being displayed (while present in the output _raw):

{"severity":"INFO","logger":"com.PayloadLogger","thread":"40362833","message":"RECEIVER[20084732]: POST /my-end-point Headers: {sedatimeout=[60000], x-forwarded-port=[443], jmsexpiration=[0], host=[hostname], content-type=[application/json], Content-Length=[1461], sending.interface=[ANY], Accept=[application/json], cookie=[....], x-forwarded-proto=[https]} {{\"content\":"Any content here"}}","properties":{"environment":"any","transactionOriginator":"any","customerId":"any","correlationId":"any","configurationId":"any"}}

{"severity":"INFO","logger":"com.PayloadLogger","thread":"40362833","message":"SENDER[20084732]: Status: {200} Headers: {Date=[Mon, 05 May 2025 07:27:18 GMT], Content-Type=[application/json]} {{\"generalProcessingStatus\":\"OK\",\"content\":[]}}","properties":{"environment":"any","transactionOriginator":"any","customerId":"any","correlationId":"any","configurationId":"any}}

I've been trying to use stats as well but have more trouble than with the transaction, which works pretty well (despite this missing response field). Can't say I'm a Splunk expert.

Firstly, let me start by stating the obvious - vulnerability scanners are notorious for being way too trigger-happy with their findings. It takes an experienced person to filter their output and get to the actually reasonable results.

Having said that - those processes are spawned by the splunkd process (not directly - via the compsup daemon). So that finding is at least questionable, if not simply a false positive.

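If you want to check the parent/child relationship from Splunk's own telemetry rather than from the scanner, something along these lines could work; this assumes the standard resource-usage introspection feed is enabled and that the field names (data.pid, data.ppid, data.process) match your version:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess
| stats latest(data.ppid) as parent_pid latest(data.args) as args by host, data.process, data.pid
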
Please provide some sample data (anonymised) which demonstrates your issue. Having said that, you could try using stats to gather your events by id, as this can be more deterministic than transaction.

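As a rough sketch of that approach (the id extraction and field names here simply mirror the searches shown elsewhere in this thread, so treat them as assumptions):

index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| stats min(_time) as start_time max(_time) as end_time values(message) as message by id
| eval duration=end_time-start_time
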
I have multiple disks like C, D & E on a server and want to do the prediction for multiple disks in the same query.

index=main host="localhost" instance="C:" sourcetype="Perfmon:LogicalDisk" counter="% Free Space"
| timechart min(Value) as "Used Space"
| predict "Used Space" algorithm=LLP5 future_timespan=180

Could anyone help with a modified query?

Hello, I'm working on a Splunk query to track REST calls in our logs. Specifically, I'm trying to use the transaction command to group related logs - each transaction should include exactly two messages: a RECEIVER log and a SENDER log. Here's my current query:

index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| transaction id startswith="RECEIVER" endswith="SENDER" mvlist=message
| search eventcount > 1
| eval count=mvcount(message)
| eval request=mvindex(message, 0)
| eval response=mvindex(message, 1)
| table id, duration, count, request, response, _raw

The idea is to group together RECEIVER and SENDER logs using the transaction id that my logs create (e.g., RECEIVER[52] and SENDER[52]), and then extract and separate the first and second messages of the transaction into request and response to have a better visualisation.

The transaction command seems to be grouping the logs correctly: I get the right number of transactions, and both receiver and sender logs are present in the _raw field. For a few cases it works fine - I have, as expected, the proper request and response in two distinct fields - but for many transactions the response (second message) is showing as NULL, even though eventcount is 2 and both messages are visible in _raw. The message field is present in both ends of the transaction, as I can see it in the _raw output.

Can someone guide me on what is wrong with my query?