All Posts



Thank you all very much for the help. The issue was related to the solution @ITWhisperer gave: in my search I was referencing the lookup table, when it should have been the lookup definition that I created. Thanks again for all the help.
I was also facing the same issue. You need to install Java and then restart Splunk; after that you will be able to see it.
Hi, I changed my company and I'd like to change my Community email, but every time I do it, after some hours the previous email comes back. I opened a case with Splunk Support but without success. How can I solve my issue? Thank you for your support. Ciao. Giuseppe
This is the old way of using custom JS and CSS for React visualisations; instead, can you follow the new framework to develop a React app?
@yuanliu 01100011 was _not_ hex. It was binary for 0x63. That's why I'm completely confused by ...

@PickleRick Lol you wouldn't believe how much time I spent trying to decipher the OP's intent from the various posted replies to everybody's attempts to help. After hours of scrolling up and down, back and forth, I distilled the instructions into the following algorithm, given an even-length HEX string, e.g., aabbcc:

1. Break the string into 2-hex-digit chunks. (OP used the term "2 bytes"; 2 hex digits are actually 1 byte.)
2. Convert each chunk into binary.
3. Reverse the order of the binary chunks.
4. Count the positions of the nonzero bits of the full reversed binary string from the right.

(As I said, I can't think of a practical purpose for this exercise. By the way, to anyone who is going to ask a question here: even though I strongly encourage describing the problem without SPL first, please make the description as algorithmic as possible.)

As a weird game, this applies to any even-length HEX string. Here's a sequence of up to 16 HEX characters:

hex | padded_binary | nonzero_bits
01 | 00000001 | 0
0002 | 00000010 00000000 | 9
000003 | 00000011 00000000 00000000 | 16 17
00000004 | 00000100 00000000 00000000 00000000 | 26
0000000005 | 00000101 00000000 00000000 00000000 00000000 | 32 34
000000000006 | 00000110 00000000 00000000 00000000 00000000 00000000 | 41 42
00000000000007 | 00000111 00000000 00000000 00000000 00000000 00000000 00000000 | 48 49 50
0000000000000008 | 00001000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 | 59
90000000000000 | 00000000 00000000 00000000 00000000 00000000 00000000 10010000 | 4 7
a00000000000 | 00000000 00000000 00000000 00000000 00000000 10100000 | 5 7
b000000000 | 00000000 00000000 00000000 00000000 10110000 | 4 5 7
c0000000 | 00000000 00000000 00000000 11000000 | 6 7
d00000 | 00000000 00000000 11010000 | 4 6 7
e000 | 00000000 11100000 | 5 6 7
f0 | 11110000 | 4 5 6 7

Another thing I realize is that I must handle 2-HEX (single-chunk) input specially.
Here is the emulation code:

| makeresults format=csv data="hex
01
0002
000003
00000004
0000000005
000000000006
00000000000007
0000000000000008
90000000000000
a00000000000
b000000000
c0000000
d00000
e000
f0"
``` data emulation above ```
| eval idx = mvrange(0, len(hex) / 2)
| eval reverse2hex = mvreverse(mvmap(idx, substr(hex, idx*2 + 1, 2)))
| eval ASbinary=if(idx < 1, tostring(tonumber(reverse2hex,16),"binary"), mvmap(reverse2hex, tostring(tonumber(reverse2hex,16),"binary")))
| eval padded_binary = if(idx < 1, printf("%08d", ASbinary), mvmap(ASbinary, printf("%08d", ASbinary)))
| eval reverse_bits = mvreverse(mvmap(padded_binary, split(padded_binary, ""))), position = -1
| foreach reverse_bits mode=multivalue [eval position = position + 1, nonzero_bits = if(<<ITEM>> == 0, nonzero_bits, mvappend(nonzero_bits, position))]
| fields hex padded_binary nonzero_bits

(Technically this works for an odd number of HEX characters, too, if OP can define where to split.)
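For cross-checking the SPL above outside Splunk, here is a minimal Python sketch of the same algorithm (assuming an even-length hex string and 0-indexed bit positions counted from the right, matching the table):

```python
def nonzero_bit_positions(hex_str):
    """Split into 2-hex-digit chunks, reverse the chunk order, expand each
    chunk to 8 zero-padded binary digits, then list the positions of set
    bits counted from the right (0-indexed)."""
    chunks = [hex_str[i:i + 2] for i in range(0, len(hex_str), 2)]
    bits = "".join(format(int(c, 16), "08b") for c in reversed(chunks))
    return [i for i, b in enumerate(reversed(bits)) if b == "1"]

print(nonzero_bit_positions("0002"))  # [9]
print(nonzero_bit_positions("f0"))    # [4, 5, 6, 7]
```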
Convert the pcap file to a text file before ingesting it into Splunk.
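One rough sketch of such a conversion, if tcpdump or Wireshark's tshark isn't handy: a classic libpcap file is a 24-byte global header followed by 16-byte per-packet record headers, so the Python standard library alone can produce a basic text summary. (This assumes the classic pcap format; pcapng is a different layout and would need a real parser.)

```python
import struct

def pcap_to_text(path):
    """Return one 'ts=<sec>.<usec> len=<captured bytes>' line per packet
    from a classic libpcap capture file (not pcapng)."""
    lines = []
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic == b"\xd4\xc3\xb2\xa1":    # file written little-endian
            endian = "<"
        elif magic == b"\xa1\xb2\xc3\xd4":  # file written big-endian
            endian = ">"
        else:
            raise ValueError("not a classic pcap file")
        f.read(20)  # skip the rest of the 24-byte global header
        while True:
            hdr = f.read(16)  # per-packet record header
            if len(hdr) < 16:
                break
            ts_sec, ts_usec, incl_len, _orig_len = struct.unpack(endian + "IIII", hdr)
            f.read(incl_len)  # skip the raw packet bytes themselves
            lines.append("ts=%d.%06d len=%d" % (ts_sec, ts_usec, incl_len))
    return lines
```

The resulting lines could be written to a file and ingested as ordinary events; extracting protocol fields would still need a proper pcap tool.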
Try something like this:

index=julie sourcetype!="julie:uat:user_activity" host!="julie-uat.home.location.net:8152" application_id=julie1
| eval DAVESessionID=if(policy_id="FETCH-DAVESESSION-ID" AND action="create_ticket",session_id,null())
| eventstats values(DAVESessionID) as DAVESessionID by device_session_id
| where policy_id="framework" AND action="session_end" AND error_code=9999
## Solution found:
- The issue was the Windows Defender Firewall blocking outbound traffic on the Windows 10 (UF) machine. I added a new outbound rule allowing any outgoing traffic via splunkd.exe, and now I can see the device in Forwarder Management.
Tried a fresh installation with the DS config as well; it didn't work.
Check here under "Join datasets on fields that have different names". You may want to test by assigning aliases to see what populates from which side of the join.  Furthermore, perform an additional table statement after your join to pull in all of the data and troubleshoot from there. --- If this reply helps you, Karma would be appreciated.
No. Your AIO (all-in-one) box, which works as SH and indexer, can also be a DS. (And it tries to be, since you have the Forwarder Management section enabled in your GUI.)
This is my query that isn't working as expected:

index=julie sourcetype!="julie:uat:user_activity" host!="julie-uat.home.location.net:8152" application_id=julie1 policy_id=framework action=session_end "error_code"=9999 "*"
| table julie_date_time, event_name, proxy_id, error_code, session_id, device_session_id, result
| rename session_id as JulieSessionId
| join type=left device_session_id
    [search index=julie sourcetype!="julie:uat:user_activity" host!="julie-uat.home.location.net:8152" application_id=julie1 policy_id="FETCH-DAVESESSION-ID" action=create_ticket
    | table timeDave, device_session_id, session_id
    | rename session_id as DAVESessionID]

Assume the primary query returns data like the following:

julie_date_time | event_name | proxy_id | error_code | Juliesession_id | device_session_id | result
2024-09-20T23:53:53 | Login | 199877 | 9999 | 1a890963 | f5318902 | pass
2024-09-19T08:20:00 | View Profile | 734023 | 9999 | 92xy9125 | 81b3e713 | pass
2024-09-17T11:23:45 | Change Profile | 089234 | 9999 | 852rs814 | 142z7x81 | pass

Requirement: I want to add the DAVEsession_ID to the above table when the following query returns something like:

timeDave | event_name | DAVEsession_id | device_session_id
2024-09-20T23:53:50 | Login | 1a890963 | f5318902
2024-09-19T08:19:58 | View Profile | 92xy9125 | 81b3e713
2024-09-17T11:23:40 | Change Profile | 852rs814 | 142z7x81

Expected outcome:

julie_date_time | event_name | proxy_id | error_code | Juliesession_id | device_session_id | result | timeDave | DAVEsession_id
2024-09-20T23:53:50 | Login | 199877 | 9999 | 1a890963 | f5318902 | pass | 2024-09-20T23:53:50 | 1a890963
2024-09-19T08:19:58 | View Profile | 734023 | 9999 | 92xy9125 | 81b3e713 | pass | 2024-09-19T08:19:58 | 92xy9125
2024-09-20T23:53:53 | Change Profile | 089234 | 9999 | 852rs814 | 142z7x81 | pass | 2024-09-17T11:23:40 | 852rs814
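The left-join semantics being asked for can be pictured outside SPL with a toy sketch (this is just an illustration of what `join type=left` on `device_session_id` should produce, not how Splunk implements it; the sample values are from the question):

```python
def left_join(left_rows, right_rows, key):
    """Left join two lists of dicts on `key`: every left row is kept,
    and fields from a matching right row are merged in when found."""
    by_key = {r[key]: r for r in right_rows}
    out = []
    for row in left_rows:
        merged = dict(row)
        merged.update(by_key.get(row[key], {}))
        out.append(merged)
    return out

primary = [{"device_session_id": "f5318902", "JulieSessionId": "1a890963"}]
sub = [{"device_session_id": "f5318902", "DAVESessionID": "1a890963"}]
print(left_join(primary, sub, "device_session_id"))
```

Rows on the left with no match on the right simply come through unchanged, which is why the eventstats-based answer in this thread behaves the same way for unmatched sessions.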
So do I need another VM set up as the deployment server? I saw 1 or 2 videos where they said that since it's a simple lab setup with only one local forwarder, deployment server config isn't needed.
You skipped the DS configuration, so your UF is _not_ managed by the DS. You can still configure your UF manually, and if you properly pointed it at the indexer, you should see the UF's internal logs in the _internal index, but you can't manage the UF until you point it at the DS. See https://docs.splunk.com/Documentation/Splunk/latest/Updating/Configuredeploymentclients
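As a minimal sketch, pointing the UF at the DS comes down to a deploymentclient.conf like the following (the address below is a placeholder; substitute your deployment server's management host and port, 8089 being the default management port):

```ini
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the UF
[deployment-client]

[target-broker:deploymentServer]
# hypothetical DS address - replace with your own
targetUri = 192.168.1.10:8089
```

After adding this (or running `splunk set deploy-poll <host>:<port>`) and restarting the UF, it should appear under Forwarder Management.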
Hi, I have set up 2 VMs in VirtualBox: Splunk Enterprise installed on Windows Server 2022, and the universal forwarder installed on a Windows 10 VM. I have enabled listening port 9997 in Splunk Enterprise. While installing the UF, I skipped the deployment server config (left it empty) and entered the IP of the Windows Server machine in the receiving indexer window.

Then I checked the connection from the UF machine to Splunk Enterprise with this PowerShell command:

Test-NetConnection -Computername xxx.xxx.x.xxx -port 9997     (Successful)

and from Splunk Enterprise to the universal forwarder:

Test-NetConnection -Computername xxx.xxx.x.xxx     (Successful)

So the connection is up and running between the two devices. But then in Splunk Enterprise, when I go to Settings > Forwarder Management, I cannot see the Windows client. Same issue in Settings > Add Data > Forward: "There are currently no forwarders configured as deployment clients to this instance."

What am I doing wrong? Did I skip any configuration? Can someone help PLEASE?
Well, that's because the GUI is... well, good for some entry-level administration and, generally, mostly for all-in-one setups. When you're creating an input, the server presents you with the list of indexes it knows about, and those are the indexes defined in indexes.conf. That's why you want that file distributed across your environment: so SHs know what to hint in the search window, and HFs (in your context, the DS is essentially acting as a HF here) can present a list of destination indexes to choose from. And that's one of the reasons why GUI administration is not enough in a more complicated setup.
@yuanliu 01100011 was _not_ hex. It was binary for 0x63. That's why I'm completely confused by @smanojkumar's explanation of how the algorithm is supposed to work. Does it work on 16-bit integers only? Does it work on a data stream of any length? Does it always produce 32-bit integers, or does the result grow with the length of the argument? It's so badly specified...
@smanojkumar Can you confirm that the results you are looking for are like the following?

hex | padded_binary | nonzero_bits
0002 | 00000010 00000000 | 9
00200100 | 00000000 00000001 00100000 00000000 | 13 16
01100011 | 00010001 00000000 00010000 00000001 | 0 12 24 28

This sounds like some data compression game. I can't think of a practical reason to do this in SPL. Is this some sort of homework? Anyway, here is a more or less literal way to interpret your instructions:

| eval idx = mvrange(0, len(hex) / 2)
| eval reverse2hex = mvreverse(mvmap(idx, substr(hex, idx*2 + 1, 2)))
| eval ASbinary=if(idx < 1, tostring(tonumber(reverse2hex,16),"binary"), mvmap(reverse2hex, tostring(tonumber(reverse2hex,16),"binary")))
| eval padded_binary = if(idx < 1, printf("%08d", ASbinary), mvmap(ASbinary, printf("%08d", ASbinary)))
| eval reverse_bits = mvreverse(mvmap(padded_binary, split(padded_binary, ""))), position = -1
| foreach reverse_bits mode=multivalue [eval position = position + 1, nonzero_bits = if(<<ITEM>> == 0, nonzero_bits, mvappend(nonzero_bits, position))]
| fields hex padded_binary nonzero_bits

Note that mvreverse on the padded binary is somewhat expensive and can be avoided with arithmetic if there is a lot of data. Here is the emulation of the three examples you give:

| makeresults format=csv data="hex
0002
00200100
01100011"
``` data emulation above ```

Applying the algorithm to this emulation gives the results tabulated at the top.
Hi @Gregski11, I don't know your infrastructure, but a Windows DS can be used without issues if you only have to manage Windows servers; if you want to manage Linux servers, with a Windows DS you lose the permissions configuration, so you cannot use scripted inputs.

Anyway, all the Splunk servers (the DS included) should send their logs directly to the indexers, and you can do this in the GUI at [Settings > Forwarding and Receiving > Forwarding], setting up the destination indexers. If you deploy an outputs.conf to your managed servers, you can use (upload) the same one here, without making a manual configuration (I prefer this solution to managing it manually!). You don't need to access conf files if you send clear-text logs; if you are using a certificate (even a Splunk auto-generated one), you need to manually modify a conf file.

About how to configure inputs: I don't like using the Settings > Inputs feature, because you have to manage it manually; it's better to use the same Splunk_TA_Windows that you deployed to the Windows servers, and you can upload it manually without accessing the CMD environment. Ciao. Giuseppe
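For anyone following along, the deployed outputs.conf would be along these lines (the group name and indexer addresses below are placeholders, not from this thread; 9997 is the conventional receiving port):

```ini
# outputs.conf inside a small deployed app, e.g. org_all_forwarder_outputs/local/
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# hypothetical indexer list - replace with your indexers' host:port
server = idx1.example.com:9997, idx2.example.com:9997
```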
The Deployment Servers should be connected to the indexers already so they can index their logs.  To tell the DSs what indexes are available, install the same indexes.conf file (in an app) that you ... See more...
The Deployment Servers should be connected to the indexers already so they can index their logs.  To tell the DSs what indexes are available, install the same indexes.conf file (in an app) that you put on the search heads.  That should let you select the desired destination index from the UI.  If that doesn't work, just edit inputs.conf (in an app).
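As a sketch, that app could contain little more than an indexes.conf like the following (the index name is a made-up example; the path settings only take effect on the indexers themselves, but shipping the same file to SHs and DSs makes the index name known to their UIs):

```ini
# indexes.conf in a shared app, e.g. org_all_indexes, installed on the
# indexers, search heads, and deployment servers alike
[my_app_index]
homePath   = $SPLUNK_DB/my_app_index/db
coldPath   = $SPLUNK_DB/my_app_index/colddb
thawedPath = $SPLUNK_DB/my_app_index/thaweddb
```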