I have server "X" on which is installed a universal forwarder.
Typically, I'd use the universal forwarder's cron functionality to trigger the execution of a PowerShell script. The PowerShell script is implemented using the PowerShell modular input to send data to an indexer, i.e., the script emits a stream of .NET objects and Splunk does the right thing with them.
Now, I have a PowerShell script whose execution is triggered by an event external to the universal forwarder. This script will also emit a stream of .NET objects and I want to use PowerShell modular input to send data to an indexer.
How to externally trigger the universal forwarder to send data using PowerShell modular input to an indexer?
I would appreciate it if you'd provide locations of and examples of *.conf files
Back to my first answer: schedule your PowerShell script with Windows Task Scheduler, and have your PowerShell send data to Splunk by any means necessary.
These are some things I've done:
Once I used PowerShell to send data to Splunk indexers via UDP.
Once I used PowerShell to write data to the Windows event logs and then indexed those logs with a Splunk universal forwarder.
Once I used PowerShell to write to CSV and then read the CSV with the Splunk UF.
One time I used PowerShell and WinRM to get data off remote servers that didn't have Splunk installed at all, put that data into a local CSV file, and ingested that.
I didn't have to use the Splunk scheduler; I did it with Windows Task Scheduler.
If you don't like these options and Splunk as such, can you at least say what product will do what you want, so we can figure out how to build a comparable solution in Splunk?
Please let me try from a different angle (with some more detail). Part A is what is currently being done. Part B is where I want to go.
I have a working PowerShell modular input app deployed to a Windows server that runs a universal forwarder. The app is currently executed by the UF according to the scheduled-task settings in its inputs.conf. The app does not create an output file for Splunk to monitor. Instead, it reads text files, composes a list of .NET objects, and emits those objects using
$array | Select-Object. This works great.
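Hypothetically, the pattern described above looks something like this; the file path and property names here are invented for illustration, not the actual app's:

```powershell
# Sketch of the pattern described: read text files, compose .NET objects,
# emit them via Select-Object. Paths and property names are hypothetical.
$sample = Join-Path ([System.IO.Path]::GetTempPath()) 'modinput-demo.txt'
Set-Content -Path $sample -Value @('line one', 'line two')

$array = foreach ($file in Get-Item $sample) {
    [PSCustomObject]@{
        SourceFile = $file.Name
        LineCount  = @(Get-Content $file.FullName).Count
    }
}

# The modular input host turns each emitted object into an event
$array | Select-Object SourceFile, LineCount
```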
I don't want the U.F. to execute the PowerShell modular app any more. So, I removed the scheduled task settings from the app's inputs.conf so that the U.F. no longer executes the PowerShell modular app. Instead, I want to execute the PowerShell modular app using another PowerShell script (unknown to Splunk) that is monitoring the text files. I've tried to test this approach by manually executing the PowerShell modular app, but when I do, the data emitted by the PowerShell modular app never arrives at the indexer.
How do I execute the PowerShell modular app external to the U.F. and have the data be sent to the indexer?
Note: Based on processing requirements, I cannot have Splunk monitor the text files directly
These are the options:
write the data to a file and monitor the file with splunk UF
Send data from PS to UF via tcp/udp input
Send data from PS to HEC on indexers.
In any of these cases you need to modify the existing PS script. If Splunk doesn't execute the script, it can't get a handle on the script's stdout, and it won't see the data the script generates unless you write the data to a file and have Splunk read the file.
Now I've said this many different ways... does it click yet?
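For the HEC option above, here's a minimal hedged sketch. The URL, token, and sourcetype are placeholders for your environment, and the actual send is guarded behind an environment flag so you can inspect the payload without a live indexer:

```powershell
# Hedged sketch: build an HTTP Event Collector payload from a stream of objects.
# $hecUrl and $hecToken are placeholders, not real values.
$hecUrl   = 'https://indexer.example.com:8088/services/collector/event'
$hecToken = '00000000-0000-0000-0000-000000000000'

$array = @(
    [PSCustomObject]@{ Name = 'svc1'; Status = 'ok' }
    [PSCustomObject]@{ Name = 'svc2'; Status = 'degraded' }
)

# HEC accepts newline-separated JSON event envelopes in a single POST body
$body = ($array | ForEach-Object {
    @{ event = $_; sourcetype = 'powershell:foo' } | ConvertTo-Json -Compress
}) -join "`n"

# Guarded send: set HEC_SEND=1 to actually POST to the collector
if ($env:HEC_SEND -eq '1') {
    Invoke-RestMethod -Uri $hecUrl -Method Post `
        -Headers @{ Authorization = "Splunk $hecToken" } `
        -Body $body
}
```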
Not sure what you mean by "integrated". The script is a PowerShell modular input and it was deployed to the server using a deployment server; in that sense, it is integrated. The only thing I'm trying to change is how it's executed.
If by "integrated" you mean Splunk has to execute it, then your comment "If splunk doesn't execute the script it can't get a handle on the stdout" DOES click.
Please tell me more about sending data from PS to UF via tcp/udp input. Do you have an example of a PS script doing this?
After reading your question and the back and forth comments, I think what you are looking for is this:
If you can't alter your PS script, this won't work. I can't think of a way to tell a modular input to watch another process's stdout, where that process starts/stops via some mechanism external to Splunk.
When you say the universal forwarder's cron, you mean cron from the Linux OS, right?
If you want to use the universal forwarders scheduler, then you would use inputs.conf and set an interval for your scripted input. See the docs on scripted inputs.
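For the UF-scheduler route, a scripted input stanza in inputs.conf looks roughly like this. The script path and interval are examples only; on Windows, a .bat or .cmd wrapper is typically used to launch a .ps1:

```ini
# $SPLUNK_HOME/etc/apps/<your_app>/local/inputs.conf  (example values)
[script://.\bin\run_foo.bat]
interval = 300
sourcetype = powershell:foo
index = main
disabled = 0
```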
Apparently that is not the case, as you have so eloquently put it... so then:
A. Modular inputs require Python, and universal forwarders do not come with a Python build natively.
B. In order to use a modular input, you will need a full version of Splunk (unless you're going to get hacky around the Python limitation above).
C. You could use the modular input on the indexers and use WinRM or the PS remoting of your choice; that's a perfectly acceptable answer.
D. Why do you come here and get mad? I do this for free, because I care... and you seem to be doing this at a cost to me, because you don't care.
Just saw your other question: "What are you talking about? 'External event trigger Splunk'?"
I mean that my Windows server "X" (the one with the universal forwarder) has had a PowerShell Modular input app deployed to it. So, on my server, I have "C:\Program Files\SplunkUniversalForwarder\etc\apps\SA-ModularInput-PowerShellFoo\bin\powershell\foo.ps1".
Normally, foo.ps1 is executed by the universal forwarder as the result of the scheduled task configured in this app's inputs.conf. All this is deployed and configured and it works ok.
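The scheduled-task configuration described above is typically a stanza along these lines in the app's inputs.conf; the stanza name, schedule, and sourcetype here are illustrative, not the app's actual values:

```ini
# etc\apps\SA-ModularInput-PowerShellFoo\local\inputs.conf  (illustrative)
[powershell://FooInput]
script = . "$SplunkHome\etc\apps\SA-ModularInput-PowerShellFoo\bin\powershell\foo.ps1"
schedule = 0 */5 * * *
sourcetype = powershell:foo
```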
Now, in my case, I don't want the universal forwarder to execute foo.ps1. Instead, I want to execute foo.ps1 from another PowerShell script on this Windows server, but still have the universal forwarder forward the data created by foo.ps1.
I've executed foo.ps1 manually by calling it from another PowerShell script on my Windows server, but the data created by foo.ps1 is not sent to my indexer.
How can I make this work?
| Out-File path\to\file
at the end of your PS pipeline, and then monitor path\to\file in inputs.conf.
So your script will execute, it will put data in a file, and the Splunk UF will gobble up the contents.
That's the easiest method, IMHO. But you should probably make the data files uniquely named by appending the epoch time or something to their file names, and monitor /path/to/logs/*.log instead, for example.
You might also wish to have Splunk destroy the data after reading it, to avoid separate clean-up tasks. You can use the batch input for that, with move_policy = sinkhole.
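Putting the above together, the two inputs.conf variants would look roughly like this (paths are placeholders):

```ini
# Plain monitor: Splunk tails the files and leaves them in place
[monitor://C:\path\to\logs\*.log]
sourcetype = powershell:foo
index = main

# Batch with sinkhole: Splunk indexes each file once, then deletes it
[batch://C:\path\to\spool\*.log]
move_policy = sinkhole
sourcetype = powershell:foo
index = main
```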
I'm mad because I don't appreciate being told "You have Splunk wrong.". I'm asking a question because I don't understand this aspect of Splunk. Am I supposed to know the answer before I post my question?
To answer the question you did ask: No, I don't mean the Linux OS; I mean the Windows OS. This is a PowerShell question. When I say "universal forwarder's cron functionality", I mean the ability of the universal forwarder installed on my Windows server to run scheduled tasks, by defining a schedule in the inputs.conf associated with the PowerShell modular input script deployed to my Windows server.
If you'd like to understand more details of my scenario, please ask. I'd appreciate your help
"Not liking how Splunk needs to interpose itself such that it needs to supply the trigger itself"
You've got Splunk wrong in the above; sorry you don't appreciate it. But you're making assumptions based on a limited understanding and terminology confusion. I'm just trying to correct your assumptions by explaining your options and telling you where you're wrong. That's how most people seem to learn here, versus going combative. But please, let's continue...
Use PowerShell to create a file with the events and monitor the file with the universal forwarder.
See this document for details on using PowerShell inputs:
I guess so, although having to create a file isn't desired. The script emits a stream of objects using Select-Object. I was hoping there was a way to feed these to the modular input infrastructure directly. From your answer, I'll assume there is no way to trigger Splunk from my script. As a monitoring tool, Splunk should only be needed on the output side of things. Not liking how Splunk needs to interpose itself such that it needs to supply the trigger itself.
There are so many options here; you've got Splunk wrong.
You can do anything with Splunk. You don't need a UF, but who am I to tell you how to write PowerShell to post to an HTTP Event Collector or directly to the API, etc.? The documentation for adding data to Splunk is robust; I gave you one option.
Please re-read my question. I think my requirements were clear: I have a PowerShell script whose execution is triggered by an event external to the universal forwarder. This script will also emit a stream of .NET objects and I want to use PowerShell modular input to send data to an indexer.
How do I externally trigger the universal forwarder to send data using PowerShell modular input to an indexer?
I know "I have Splunk wrong". That's why I posted my question. Providing an answer that ignores my requirements isn't helpful
See inputs.conf docs for your options.
You can stream over TCP, or send via UDP, or you could use PowerShell to write a file on the indexer and monitor it there.
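A hedged sketch of the UDP option: the PS side sends newline-delimited lines to a UDP port the UF listens on. Host, port, and field names are placeholders, and the UF would need a matching [udp://5514] stanza in its inputs.conf:

```powershell
# Hedged sketch: send events from PowerShell to a Splunk UDP input.
# Host and port are placeholders; UDP sends succeed even with no listener.
$splunkHost = '127.0.0.1'
$splunkPort = 5514

$array = @(
    [PSCustomObject]@{ Name = 'svc1'; Status = 'ok' }
    [PSCustomObject]@{ Name = 'svc2'; Status = 'degraded' }
)

$udp = New-Object System.Net.Sockets.UdpClient
$sent = foreach ($obj in $array) {
    $line  = "name=$($obj.Name) status=$($obj.Status)"
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($line + "`n")
    # Send returns the number of bytes written to the datagram
    $udp.Send($bytes, $bytes.Length, $splunkHost, $splunkPort)
}
$udp.Close()
```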
If you really must use the modular input, then you need Splunk... so what you could do is run the PowerShell script on the indexer, use WinRM to manage your .NET server remotely, and spit all the data into the modular input on the indexer.
You have a misunderstanding of what the PowerShell modular input is. It's not made for receiving data from remotely hosted PowerShell scripts. It's made for indexing data directly from locally executed PS scripts, scripts that are executed by the Splunk user as scheduled in Splunk.