Hi @SatriaCiso, Splunk Indexer Clusters (Single or Multi Site) always have exactly one Cluster Manager, and the cluster keeps running even when the CM is down, so you don't need a second CM in your architecture. If you want to have a second CM, it must have the same hostname and IP as the first, and it must normally stay down, being turned on only when the first one fails. You can read about this at https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Multisiteclusters and at https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf Ciao. Giuseppe
Hi, I have a situation where I need to install the management tier on 2 sites (DC and DR) on VPC servers. The problem is, I don't have permission to perform a vMotion from one site to the other. The reason I need to install the management tier on both sites is that I want to upgrade the OS, currently RHEL 7.9 (EOS soon), to RHEL 8.9, and the compan And, if I just install on both sites, how do I sync the data from one management tier to the other? Do I need to copy the data every day, or just once? Any help would be appreciated.
Hi, thanks for your response. I tried the same thing and it works, but it does not work on my real data. The problem is the IP address. When I removed src_ip from the lookup command, it worked on real records. But I cannot understand what the problem is! I checked the name of the field in the events and tried both srcip and src_ip; neither worked. Regards
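For what it's worth, field-name or value-formatting mismatches are the usual culprit when a lookup matches test data but not real events. A minimal Python sketch of the idea (field names and IP values are made up), showing that a lookup keyed on src_ip silently misses events whose IPs carry stray whitespace or arrive under a different field name, until the key is normalized:

```python
# Lookup table keyed by src_ip, as the CSV would be loaded.
lookup = {"10.0.0.1": "branch-office", "10.0.0.2": "datacenter"}

# Real events sometimes carry the IP with padding or under another name.
events = [{"src_ip": "10.0.0.1 "}, {"srcip": "10.0.0.2"}]

def enrich(event):
    # Accept either field name and strip whitespace before matching,
    # mirroring | eval src_ip=trim(coalesce(src_ip, srcip)) before | lookup.
    ip = (event.get("src_ip") or event.get("srcip", "")).strip()
    return lookup.get(ip)

print([enrich(e) for e in events])   # both events now match
```

Comparing the raw field names and values in the events against the lookup's header row usually pinpoints which normalization step is missing.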
Thanks, @yuanliu, this works! So does @ITWhisperer's solution! I don't think it will allow me to select both as accepted solutions, so I clicked it for yours since you replied first. Thanks!!
| set diff [| tstats count where source_1 by host | table host] [| tstats count where source_2 by host | table host]

"That SPL provides a list of all of the hosts not seen in source_2"

The search is not wrong, but that last statement is inaccurate, because set diff as shown produces a list of all hosts in source_1 not seen in source_2, plus all hosts in source_2 not seen in source_1. (The statement is correct only if the hosts in source_2 are a subset of those in source_1. Maybe this is a condition known in your use case?) So it is equivalent to the search I posted in this one. To get a list of only those hosts in source_1 that are not in source_2, use my search in this earlier one or, as @PickleRick suggested, improve it with tstats, like

| tstats values(host) as host where source_1 NOT
    [| tstats values(host) as host where source_2]

If the hosts in source_2 are a subset of those in source_1, as may be the case, this method will produce the exact same result and will still be more efficient.
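The distinction between set diff and a one-sided difference can be sketched outside SPL; here is a minimal Python illustration (the host names are made up) comparing the symmetric difference, which is what set diff computes, with the one-sided difference the tstats ... NOT [...] form computes:

```python
# Hypothetical host lists standing in for the two tstats subsearches.
hosts_source_1 = {"web01", "web02", "db01"}
hosts_source_2 = {"web01", "app01"}

# What "| set diff" computes: hosts appearing in exactly one of the lists.
symmetric = hosts_source_1 ^ hosts_source_2

# Hosts in source_1 missing from source_2 (the tstats ... NOT [...] form).
one_sided = hosts_source_1 - hosts_source_2

# The two agree only when source_2's hosts are a subset of source_1's.
print(sorted(symmetric))   # includes app01, which is only in source_2
print(sorted(one_sided))   # only source_1 hosts missing from source_2
```

Because app01 appears only in source_2, set diff reports it while the one-sided search does not, which is exactly the inaccuracy described above.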
Splunk uses PCRE, but there are some differences, and I have a hard time trusting it with multiline. Your code sample in my vanilla 9.1.2 installation, for example, results in 77777777* alone. Because your ANSI 835 is strictly formatted, maybe split will suffice. | eval clmevent = mvindex(split(_raw, "
CLP*"), 1, -1) ``` extra newline is paranoia - sample works without ```
| mvexpand clmevent

Your sample event gives me all three. Here is an emulation you can compare with real data

| makeresults
| fields - _time
| eval _raw="N4*Carson*NV*89701~
PER*BL*Nevada Medicaid*TE*8776383472*EM*nvmmis.edisupport@dxc.com~
N1*PE*SUMMER*XX*6666666666~
REF*TJ*111111111~
CLP*77777777*4*72232*0**MC*6666666666666~
CAS*OA*147*50016*0~
CAS*CO*26*22216*0~
NM1*QC*1*TOM*SMITH****MR*77777777777~
NM1*74*1*ALAN*PARKER****C*88888888888~
NM1*PR*2*PACIFI*****PI* 9999~
NM1*GB*1*BARRY*CARRY****MI*666666666~
REF*EA*8888888~
DTM*232*20180314~
DTM*233*20180317~
SE*22*0001~
ST*835*0002~
BPR*H*0*C*NON************20180615~
TRN*1*100004765*5555555555~
DTM*405*20180613~
N1*PR*DIVISON OF HEALTH CARE FINANCING AND POLICY~
N3*1100 East William Street Suite 101~
N4*Carson*NV*89701~
PER*BL*Nevada Medicaid*TE*8776383472*EM*nvmmis.edisupport@dxc.com~
N1*PE*VALLEY*XX*6666666666~
REF*TJ*530824679~
LX*1~
CLP*77777778*2*3002*0**MC*6666666666667~
CAS*OA*176*3002*0~
NM1*QC*1*BOB*THOMAS****MR*55555555555~
NM1*74*1*ALAN*JACKSON****C*66666666666~
REF*EA*8888888~
DTM*232*20171001~
DTM*233*20171002~
CLP*77777779*4*41231.04*0**MC*6666666666668~
CAS*OA*147*9365.04*0~
CAS*CO*26*31866*0~
NM1*QC*1*HELD*ALLEN****MR*77777777778~
NM1*74*1*RYAN*LARRY****C*88888888889~
NM1*PR*2*SENIOR*****PI* 8888~"
``` data emulation above ``` Hope this helps.
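The split/mvexpand approach can also be checked outside Splunk; here is a minimal Python sketch of the same idea (the two-claim sample below is made up), splitting raw 835 text on a newline-plus-CLP* boundary and dropping the preamble, just as mvindex(split(...), 1, -1) does:

```python
raw = """N1*PE*SUMMER*XX*6666666666~
CLP*77777777*4*72232*0**MC*6666666666666~
CAS*OA*147*50016*0~
CLP*77777778*2*3002*0**MC*6666666666667~
CAS*OA*176*3002*0~"""

# Split on the CLP segment boundary; element 0 is the header before the
# first claim, so drop it (SPL: mvindex(split(_raw, "\nCLP*"), 1, -1)).
claims = ["CLP*" + chunk for chunk in raw.split("\nCLP*")[1:]]

# In SPL, mvexpand then yields one event per claim; here each list
# element plays that role.
for claim in claims:
    print(claim.splitlines()[0])
```

Each resulting element starts at a CLP segment and carries its trailing CAS/NM1/etc. lines, which is why mvexpand produces one well-formed claim event per value.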
In the Windows event log I can see that some drivers are successfully installed by the update. And then I see these events:

01/23/2024 11:17:26 PM LogName=Application EventCode=11708 EventType=4 ComputerName=WIN10SERVER User=NOT_TRANSLATED Sid=S-1-5-21-451409098-3557801342-1863680623-1001 SidType=0 SourceName=MsiInstaller Type=Information RecordNumber=212304 Keywords=Classic TaskCategory=None OpCode=Info Message=Product: UniversalForwarder -- Installation failed.

01/23/2024 11:17:26 PM LogName=Application EventCode=1033 EventType=4 ComputerName=WIN10SERVER User=NOT_TRANSLATED Sid=S-1-5-21-451409098-3557801342-1863680623-1001 SidType=0 SourceName=MsiInstaller Type=Information RecordNumber=212305 Keywords=Classic TaskCategory=None OpCode=Info Message=The product was installed by Windows Installer. Product name: UniversalForwarder. Product version: 9.1.3.0. Product language: 1033. Manufacturer: Splunk, Inc.. Installation success or failure status: 1603.
Hi all, today I successfully updated Splunk Enterprise to 9.1.3 (from 9.1.2) on a Windows 10 22H2 Pro machine with the newest Windows updates (January 2024). Then I wanted to update the Universal Forwarder on this machine, too. Currently, 9.1.2 is running and everything is working fine. But updating to 9.1.3 doesn't work. Near the end of the installation process, the installation is rolled back to 9.1.2. Before the rollback, several more windows pop up for a very short time. And then more than one message window appears, saying that the installation failed. You then have to click OK in every message window for the rollback to finish successfully. I don't see why the update is failing. Does anyone have the same issue? And how did you solve it? Thank you.
With the update to Splunk Enterprise 9.1.3 everything is looking fine. There are no more messages on the start page: "Unable to load the app list. Refresh the page to try again." and "Unable to load common tasks. Refresh the page to try again."
And how are you managing your license? If you have a cluster you should have a designated license master and manage your license there. Otherwise you'll get errors like this.
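As a minimal sketch of that setup (the hostname is a placeholder; on versions before 9.x the setting is named master_uri rather than manager_uri), each cluster node's server.conf would point at the designated license manager instead of carrying its own copy of the license:

```ini
# server.conf on each node that should pull its license from the
# central license manager (hostname below is a placeholder)
[license]
manager_uri = https://license-manager.example.com:8089
```

With every peer reporting to one license manager, the same license file is installed in exactly one place, which avoids the duplicate-license errors described above.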
Either Splunk Cloud or Splunk Enterprise; there is no such thing as "Splunk Cloud Enterprise". Also, why do you want it to be a lookup? You can easily just use events in your table.
MV fields being stored as strings would make sense, or at least an object containing strings and a hidden delimiter field. I wish one of the architects could give a technical talk on how this stuff works behind the scenes, so we could help with more pointed debugging info. I considered the lack of strict typing on the kvstore collection as the issue, but regardless, it shouldn't be returning the wrong value instead of nulls. That brings it back to the lookup command being the issue. I also think I have found a bug with mvmap that exhibits very similarly to this one, but since it was such an edge case I didn't even make a blog post about it. Hoping a Splunk employee will see this and track down the issue behind the scenes.
Hi All, I just wanted to get your feedback on the issue below that we have right now with our new Splunk Cloud instance. Unlike in the Enterprise version, where you can assign an index to an app, we don't see the same option available in Splunk Cloud. Does anyone know how apps determine which index to search without defining it? When you create new indexes, the app column shows 000-self-service and not the app we want. Thank you
I have a file that's updated every 5 minutes; it's populated by capturing a value from a URL using Python code (the value is "OK" or "bad"). I want to use the new file (created every 5 minutes) in a Splunk classic dashboard. I'm using Splunk Cloud Enterprise, and I'm not sure how to go about automating this process. Is there a way to update/replace a file in the lookup table files? Or some other way I can go about pulling in the new file on every refresh of the dashboard?
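One common pattern is to have the same script that polls the URL also rewrite a small CSV and push it up as a lookup on each run. A minimal sketch of the CSV-building half (the file layout and field names are assumptions; the upload step is only outlined in a comment, since Splunk Cloud restricts direct filesystem access and the usual routes are the Splunk App for Lookup File Editing or the REST API):

```python
import csv
import io
import time

def build_status_lookup(status: str) -> str:
    """Render the lookup body as CSV text; 'status' is the OK/bad value
    the polling code captured from the URL."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["checked_at", "status"])  # header row Splunk uses as field names
    writer.writerow([int(time.time()), status])
    return buf.getvalue()

csv_text = build_status_lookup("OK")
# Upload step (mechanism depends on your Splunk Cloud setup, so it is left
# as a comment): POST csv_text to the search head, e.g. via the lookup
# file editing app's REST interface, or hand it to a scripted input.
print(csv_text)
```

The dashboard then reads the lookup with | inputlookup, so each 5-minute rewrite shows up on the next refresh without any dashboard changes.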
Well, you can to some extent treat SPL as dynamically typed - the fields are created on the fly, and there is really not much of a way to force a field to be of a given type. The tonumber() function seems to do the trick for converting to a number, and indeed a numeric string seems to be treated like a number. I remember there used to be some cases with displaying values which suggest that multivalue fields are also stored as strings. So it seems (but that's just an observation from outside, I'm no Splunk developer) that values are internally, at least to some extent, stored as strings.
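That observation matches a familiar SPL gotcha: values that look numeric but are stored as strings compare and sort lexicographically until converted. A rough Python analogue of what tonumber() fixes (the values are made up):

```python
# Values as Splunk appears to store them internally: strings.
values = ["10", "9", "100"]

# String comparison is lexicographic, so "10" sorts before "9".
string_order = sorted(values)

# Converting first (SPL: | eval n = tonumber(field)) restores numeric order.
numeric_order = sorted(values, key=int)

print("string order: ", string_order)   # "10" and "100" come before "9"
print("numeric order:", numeric_order)  # 9, 10, 100 as expected
```

The same character-by-character comparison is why an unconverted field can pass a where clause you would expect it to fail, which fits the stored-as-strings theory above.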
I wouldn't expect it to be an issue in a dynamically typed language either. In my other response to scelikok, I included a search where the typeof() command either isn't showing the real type of the field, or is confirming it is a memory allocation/referencing issue, because I think the C++ tostring() function re-allocates the variables (it's been a while, so I could be wrong though). Do you know if typeof() is returning the actual type of the variable? Or is it all stored as an object behind the scenes, so the best typeof() can do is guess based on the contents? Even when I cast with tostring(), Splunk still calls it a number. Not calling it a float or an int makes me think it is taking a best guess based on object contents. Reposting the other response's search below:

| inputlookup kvstore_560k_lines_long max=15000
| stats count by secondUID
| where count=1 AND match(secondUID, "^\d+$")
| head 4000
| eval initialType=typeof(secondUID)
| eval secondUID=tostring(secondUID)
| eval subsiquentType=typeof(secondUID)
| where like(initialType, subsiquentType)
``` all of the types still read as "number" so no events are removed ```

My biggest clue that this is abnormal is that the event count affects the % of lookup errors in the initial search. If I run the same events that initially have errors through the search, but with a smaller search event count, they return the correct results.
I terminated an AWS instance that also happened to be my cluster manager, so now, when I created another cluster manager, I get an error message saying that my license is already in use when I add the same license to my nodes. Please, how can I solve this issue? Thank you. Error [00020000] Instance name "indexer02" License key 5C52DA5145AD67B8188604C49962D12F2C3B2CF1B82A6878E46F68CA2812807B used by peer is already in use by another peer in the deployment. Last Connect Time:2024-01-23T19:23:35.000+00:00; Failed 6 out of 6 times.